Faculty & Research


Artificial Intelligence, Trust, and Perceptions of Agency

Journal Article
Modern artificial intelligence (AI) technologies based on deep learning architectures are often perceived as agentic to varying degrees: typically, as more agentic than other technologies but less agentic than humans. The authors theorize how different levels of perceived agency of AI affect human trust in AI. The authors do so by investigating three causal pathways. First, an AI (and its designer) perceived as more agentic will be seen as more capable, and therefore will be perceived as more trustworthy. Second, the more the AI is perceived as agentic, the more important are trustworthiness perceptions about the AI relative to those about its designer. Third, because of betrayal aversion, the anticipated psychological cost of the AI violating trust increases with how agentic it is perceived to be. These causal pathways imply, perhaps counterintuitively, that making an AI appear more agentic may increase or decrease the trust that humans place in it: success at meeting the Turing test may go hand in hand with a decrease of trust in AI. The authors formulate propositions linking agency perceptions to trust in AI by exploiting variations in the context in which the human–AI interaction occurs and the dynamics of trust updating.