Working Paper
Explainability lies at the core of several phenomena related to the integration of information held by different actors, including cases in which machine intelligence and humans interact.
The authors take as a premise that explaining something to someone requires finding a link between what they already know and what is yet to be explained. They propose formalizing this process as traversal across overlapping knowledge graphs, with the explainer facing an optimal stopping problem.
This conceptualization helps to understand the barriers to explainability in terms of the relationships between the two graphs, and of the costs and benefits of continuing the search for a path that links what is to be explained to what is already in both graphs.
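As a rough illustration of this framing (a sketch, not the authors' formalization), the code below models the explainer's knowledge as an adjacency dictionary, the overlap with the explainee's knowledge as a set of shared concepts, and the explanation attempt as a breadth-first search that is cut off once the cumulative search cost exceeds an assumed benefit of explaining. All concept names, the cost and benefit values, and the stopping rule are hypothetical.

```python
from collections import deque

# Explainer's knowledge graph: concept -> directly linked concepts (hypothetical).
explainer_graph = {
    "gradient_descent": ["slope", "optimization"],
    "slope": ["hill_walking"],
    "optimization": ["budgeting"],
    "hill_walking": [],
    "budgeting": [],
}

# Concepts the explainee already knows, i.e. the overlap between the two graphs.
explainee_known = {"hill_walking", "budgeting"}


def find_explanatory_path(target, graph, known, step_cost=1.0, benefit=3.0):
    """Search from the concept to be explained toward any concept the explainee
    already knows. Search along a branch stops (optimal-stopping flavor) once
    its cumulative cost would exceed the assumed benefit of an explanation."""
    queue = deque([(target, [target], 0.0)])
    visited = {target}
    while queue:
        node, path, cost = queue.popleft()
        if node in known:
            return path  # A chain linking the new concept to shared knowledge.
        if cost + step_cost > benefit:
            continue  # Further search from here is no longer worth its cost.
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor], cost + step_cost))
    return None  # Explanation fails: no affordable path into shared knowledge.


if __name__ == "__main__":
    path = find_explanatory_path("gradient_descent", explainer_graph, explainee_known)
    print(path)  # e.g. ['gradient_descent', 'slope', 'hill_walking']
```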