Continuous monitoring is necessary to reduce risk in AI and machine learning-based medical technology
Artificial intelligence and machine learning (AI/ML) are increasingly transforming the healthcare sector. From spotting malignant tumours to reading CT scans and mammograms, AI/ML-based technology can be faster and more accurate than traditional devices – or even the best doctors. But along with the benefits come new risks and regulatory challenges.

In their article “Algorithms on regulatory lockdown in medicine”, recently published in Science, Boris Babic, INSEAD Assistant Professor of Decision Sciences; Theodoros Evgeniou, INSEAD Professor of Decision Sciences and Technology Management; Sara Gerke, Research Fellow at Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics; and I. Glenn Cohen, Professor at Harvard Law School and Faculty Director at the Petrie-Flom Center, look at the new challenges facing regulators as they navigate the unfamiliar pathways of AI/ML.

They consider the questions: What new risks do we face as AI/ML devices are developed and implemented? How should they be managed? What factors do regulators need to focus on to ensure maximum value at minimal risk?

Until now, regulatory bodies like the U.S. Food and Drug Administration (FDA) have approved medical AI/ML-based software with “locked algorithms”, that is, algorithms that provide the same result each time and do not change with use. However, a key strength of most AI/ML technology is its ability to evolve as the model learns in response to new data. These “adaptive algorithms”, made possible by AI/ML, create what is in essence a learning healthcare system, in which the boundaries between research and practice are porous.

Given the significant value of this adaptive approach, a fundamental question for regulators today is whether authorisation should be limited to the version of the technology that was submitted and evaluated as safe and effective, or whether to permit the marketing of an algorithm whose greater value lies in its ability to learn and adapt to new conditions.

The authors take an in-depth look at the risks associated with this update problem, considering the specific areas that require focus and ways in which the challenges could be addressed.

The key to strong regulation, they say, is to prioritise continuous risk monitoring.

“To manage the risks, regulators should focus particularly on continuous monitoring and risk assessment, and less on planning for future algorithm changes,” say the authors.

As regulators move forward, the authors recommend that they develop new processes to continuously monitor, identify and manage the associated risks. They suggest key elements that could help with this, some of which may in the future themselves be automated using AI/ML – possibly with AI/ML systems monitoring each other.

While the paper draws largely on the FDA’s experience in regulating biomedical technology, the lessons and examples have broad relevance as other countries consider how to shape their own regulatory architecture. They are also important for any business that develops products and services with embedded AI/ML – from automotive, to insurance, financial services, energy, and increasingly many others. Executives in all organisations have a lot to learn about managing new AI/ML risks from how regulators think about them today.

“Our goal is to emphasise the risks that can arise from unanticipated changes in how medical AI/ML systems react or adapt to their environments,” say the authors, warning that, “Subtle, often unrecognised parametric updates or new types of data can cause large and costly mistakes.”

About INSEAD, The Business School for the World

As one of the world’s leading and largest graduate business schools, INSEAD brings together people, cultures and ideas to develop responsible leaders who transform business and society. Our research, teaching and partnerships reflect this global perspective and cultural diversity.

With locations in Europe (France), Asia (Singapore), the Middle East (Abu Dhabi), and now North America (San Francisco), INSEAD's business education and research span four regions. Our 162 renowned Faculty members from 40 countries inspire more than 1,300 degree participants annually in our Master in Management, MBA, Global Executive MBA, Specialised Master’s degrees (Executive Master in Finance and Executive Master in Change) and PhD programmes. In addition, more than 10,000 executives participate in INSEAD Executive Education programmes each year.

INSEAD continues to conduct cutting-edge research and innovate across all our programmes. We provide business leaders with the knowledge and awareness to operate anywhere. Our core values drive academic excellence and serve the global community as The Business School for the World.

Contacts for press:

Aileen Huang
Tel +65 9008 3812
Email: [email protected]
Cheryl Ng
Tel +65 8750 0788
Email: [email protected]
Gwenaelle Hennequin
Tel +33 6 15 12 10 86
Email: [email protected]