Journal Article
Existing Explainable AI (XAI) approaches, such as the widely used SHAP values or counterfactual (CF) explanations, are arguably too technical for many users to understand and act upon. To improve both comprehension of AI decisions and the overall user experience, the authors introduce XAIstories, which leverage Large Language Models (LLMs) to generate narratives about how AI predictions are made: SHAPstories, based on SHAP values, and CFstories, based on CF explanations.
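The core idea can be illustrated with a minimal sketch: precomputed SHAP values for a single prediction are formatted into a prompt that an LLM could turn into a plain-language story. The feature names, values, and prompt wording below are invented for illustration; the paper's actual pipeline may differ.

```python
def build_shapstory_prompt(prediction, shap_values):
    """Format a prediction and its SHAP attributions as an LLM prompt.

    `shap_values` maps feature names to their (hypothetical, precomputed)
    SHAP contributions for this one prediction.
    """
    # Rank features by absolute contribution, most influential first.
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: contribution {value:+.2f}" for name, value in ranked]
    return (
        f"The model predicted: {prediction}.\n"
        "Feature contributions (SHAP values):\n"
        + "\n".join(lines)
        + "\nWrite a short, plain-language story explaining this prediction."
    )

# Hypothetical example: a loan-approval prediction.
prompt = build_shapstory_prompt(
    "loan approved",
    {"income": 0.42, "age": -0.05, "credit_history": 0.31},
)
print(prompt)
```

In a full pipeline, the returned prompt would be sent to an LLM (not shown here), whose response is the narrative presented to the end user.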
The authors study the impact of their approach on users' experience and understanding of AI predictions. The results are striking: in a tabular-data experiment, over 90% of the surveyed general audience find the narratives generated by SHAPstories convincing, and over 78% find those generated by CFstories convincing. In an image experiment, more than 75% of respondents find CFstories as convincing as, or more convincing than, stories they craft themselves.
The authors also find that the generated stories help users summarize and understand AI decisions more accurately than when only SHAP values are provided. The results indicate that combining LLM-generated stories with current XAI methods is a promising and impactful research direction.
Faculty
Professor of Technology and Business