Journal Article
Predictive systems built on high-dimensional behavioral and textual data suffer from serious comprehensibility and transparency issues: explaining a linear model requires inspecting thousands of coefficients, and nonlinear models are more opaque still.
Counterfactual explanations are an increasingly popular way to generate insight into model predictions. This study aligns the recently proposed linear interpretable model-agnostic explainer (LIME) and Shapley additive explanations (SHAP) with the notion of counterfactual explanations, and empirically compares the effectiveness and efficiency of these novel algorithms against a model-agnostic heuristic search algorithm for finding evidence counterfactuals (SEDC) on 13 behavioral and textual data sets.
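To make the notion of an evidence counterfactual concrete, the sketch below illustrates the general idea on sparse binary data: given an additive importance ranking (such as one produced by LIME or SHAP), active features are removed greedily until the predicted class flips, and the removed set serves as the explanation. This is a minimal illustration with hypothetical names (`evidence_counterfactual`, `predict_proba`, `importance`), not the authors' SEDC, LIME-C, or SHAP-C implementations.

```python
import numpy as np

def evidence_counterfactual(x, predict_proba, importance, threshold=0.5):
    """Greedily remove active features, most important first, until the
    predicted score falls below `threshold` (i.e. the class flips).

    Returns the indices of the removed features (a counterfactual
    explanation), or None if no prefix of the ranking flips the prediction.
    """
    x = x.astype(float).copy()
    active = np.flatnonzero(x)                       # features present in the instance
    order = active[np.argsort(-importance[active])]  # rank active features, descending
    removed = []
    for j in order:
        x[j] = 0.0                                   # "remove" the feature
        removed.append(j)
        if predict_proba(x) < threshold:
            return removed                           # smallest prefix that flips the class
    return None
```

A heuristic best-first search in the style of SEDC instead explores candidate removal sets adaptively rather than fixing the order up front; the sketch above corresponds to one plausible way an importance ranking can be aligned with counterfactual search.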
The authors show that the different search methods have distinct strengths and, importantly, that much room remains for future research.