Explanation Seeking and Anomalous Recommendation Adherence in Human-To-Human Versus Human-To-Artificial Intelligence Interactions

Journal Article
The use of artificial intelligence (AI) in operational decision-making is growing, but individuals can display algorithm aversion, failing to adhere to AI system recommendations even when the system outperforms human decision-makers. Understanding why algorithm aversion occurs, and how to reduce it, is important to ensure AI is fully leveraged. While the ability to seek an explanation from an AI may be a promising way to mitigate this aversion, the evidence on the benefits of explanations is conflicting. Drawing on several behavioral theories, including Bayesian choice, loss aversion, and sunk cost avoidance, the authors hypothesize that a recommendation perceived as an anomalous loss will decrease recommendation adherence; they further hypothesize that this effect will be mediated by explanations and will differ depending on whether the advisor providing the recommendation and explanation is a human or an AI. In a survey-based lab experiment set in the online rental market, the authors find that presenting a recommendation as a loss anomaly significantly reduces adherence compared to presenting it as a gain; however, this negative effect is dampened when the advisor is an AI. Explanation seeking has a limited impact on adherence, even after accounting for the influence of the advisor. The authors discuss the managerial and theoretical implications of these findings.
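As background (not part of the article's abstract), loss aversion is commonly formalized via the prospect-theory value function, under which a loss of a given magnitude is weighted more heavily than an equal-sized gain. This illustrates why a recommendation framed as an anomalous loss would be expected to reduce adherence more than one framed as a gain. A standard form, with illustrative parameter estimates from Tversky and Kahneman (1992), is:

\[
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0 \\
-\lambda\,(-x)^{\beta}, & x < 0
\end{cases}
\qquad \lambda > 1
\]

with typical estimates \(\alpha \approx \beta \approx 0.88\) and \(\lambda \approx 2.25\), so \(|v(-x)| > v(x)\): in the linear case (\(\alpha = \beta = 1\)), a loss of 100 is felt roughly as strongly as a gain of 225.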