Artificial Intelligence, Trust, and Perceptions of Agency (Revision 1)

Working Paper
The literature on trust among humans assumes that trustees are viewed as having agency (i.e., they display the capacity to think, plan, and act); otherwise, trust is undefined. In contrast, the literature on confidence in technology does not require this assumption about the technologies we make ourselves vulnerable to (hence the term "confidence" rather than "trust" when applied to technology). Modern artificial intelligence (AI) technologies based on deep learning architectures are often perceived as agentic to varying degrees: typically as more agentic than other technologies, but less agentic than humans. The authors theorize how different levels of perceived agency of an AI affect human trust in it, investigating three mechanisms. First, a more agentic-seeming AI (and its designer) appears more able to execute relevant tasks, and therefore more trustworthy. Second, the more agentic-seeming the AI, the more important perceptions of the AI's trustworthiness become relative to perceptions of its designer's. Third, because of betrayal aversion, the anticipated psychological cost of the AI violating trust increases with how agentic it seems. These mechanisms imply, perhaps counterintuitively, that making an AI appear more agentic may either increase or decrease the trust that humans place in it: success at passing the Turing test may go hand in hand with a decrease in trust in AI.
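One way to see how the three mechanisms can pull in opposite directions is a stylized formalization. This is an illustrative sketch, not a model taken from the paper; the functions $w$, $\tau_{AI}$, $\tau_{D}$, and $\beta$ are assumptions made purely for exposition. Let $a$ denote the perceived agency of the AI, $\tau_{AI}(a)$ the perceived trustworthiness of the AI, $\tau_{D}(a)$ that of its designer, $w(a) \in [0,1]$ the weight placed on the AI relative to its designer, and $\beta(a)$ the anticipated psychological cost of betrayal. Overall trust might then be written as

\[
T(a) \;=\; w(a)\,\tau_{AI}(a) \;+\; \bigl(1 - w(a)\bigr)\,\tau_{D}(a) \;-\; \beta(a),
\]

with $\tau_{AI}'(a) > 0$ and $\tau_{D}'(a) > 0$ (mechanism 1), $w'(a) > 0$ (mechanism 2), and $\beta'(a) > 0$ (mechanism 3). The derivative $T'(a)$ mixes positive terms (greater perceived ability) with terms that can be negative (more weight shifted onto the AI when $\tau_{AI} < \tau_{D}$, and a rising betrayal cost), so greater perceived agency can either raise or lower trust depending on which effect dominates.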