Tool and Tutor? Experimental Evidence from AI Deployment in Cancer Diagnosis (Revision 1)

Working Paper
Numerous countries face shortages of medical experts, deepening inequalities in access to healthcare. Artificial Intelligence (AI)-based diagnostic tools hold considerable promise for tackling this challenge by enabling even novices to deliver expert-level medical services. However, relying on AI to complete tasks may hinder the learning novices need to develop expertise. The authors therefore explore whether AI-based diagnostic tools can enhance not only performance but also learning, in the context of lung cancer diagnosis. They examine the distinct effects of AI input during training (i.e., learning how to diagnose) versus in practice (i.e., completing diagnostic tasks) on novice medical professionals’ performance. In two field experiments, 576 medical students were randomly assigned across conditions that manipulated access to AI input during their training, during a test of their diagnostic capabilities, or both. During practice, participants diagnosed potential lung cancer cases using chest CT scans, and their diagnoses were evaluated against the ground truth obtained through histopathological examinations. Study 1 (N = 336) revealed that AI input in training alone improved human diagnostic accuracy by 3.2 percentage points over the control, while AI input during practice alone increased human accuracy by 7.9 percentage points. Combined deployment in both training and practice yielded an improvement of 13.7 percentage points, significantly exceeding either approach alone. Study 2 (N = 240) showed that AI input in practice alone improved accuracy in subsequent practice, unaided by AI, by 9.9 percentage points over the control. Even minimally informative AI input in training improved diagnostic accuracy by 5.3 percentage points over the control. These results reveal AI’s dual role: as a tool, it could rapidly improve novices’ performance; as a “tutor,” it could enhance their capabilities for independent diagnoses unaided by AI. We propose that using AI tools need not lead to human skill decay or stagnation. When the tool provides input that enhances humans’ mental models relevant to the task, rather than merely completing the task for them, it could strengthen humans’ independent capabilities for task completion.