Historically, tech leaders have assumed that the better a human can understand an algorithm, the less accurate it will be. But is there always a tradeoff between accuracy and explainability?
The authors tested a wide array of AI models on nearly 100 representative datasets, and they found that 70% of the time, a more-explainable model could be used without sacrificing accuracy. Moreover, in many applications, opaque models come with substantial downsides related to bias, equity, and user trust.
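For readers who want to probe this claim on their own data, a minimal sketch of such a comparison is below. This is not the authors' methodology; the dataset, models, and metric are illustrative. It pits an interpretable linear model against a harder-to-explain ensemble and reports cross-validated accuracy for each.

```python
# Illustrative sketch: does a black-box model actually beat an explainable one
# on this dataset? (Hypothetical example; not the study's actual protocol.)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# Interpretable baseline: a scaled linear model whose coefficients can be inspected.
explainable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Opaque alternative: an ensemble that is harder to explain to end users.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("logistic regression", explainable), ("random forest", black_box)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

If the two accuracy figures are within noise of each other, the more explainable model is the safer choice, which is the pattern the authors report in roughly 70% of cases.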
As such, the authors argue that organizations should think carefully before integrating unexplainable, “black box” AI tools into their operations, and should first determine whether those models are really worth the risk.