Quality & Governance News

New Healthcare AI Framework Incorporates Medical Knowledge, Values

A newly published artificial intelligence framework advocates for a “sociotechnical” approach to advance the technology’s integration into healthcare.


By Shania Kennedy

A novel normative framework for healthcare artificial intelligence (AI), described in a recent issue of Patterns, asserts that medical knowledge, procedures, practices, and values should be considered when integrating the technology into clinical settings.

The approach—developed by researchers from Carnegie Mellon University, The Hospital for Sick Children, the Dalla Lana School of Public Health, Columbia University, and the University of Toronto—is designed to help stakeholders holistically evaluate AI in healthcare.

“Regulatory guidelines and institutional approaches have focused narrowly on the performance of AI tools, neglecting knowledge, practices, and procedures necessary to integrate the model within the larger social systems of medical practice,” explained co-author Alex John London, PhD, the K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon, in a press release. “Tools are not neutral—they reflect our values—so how they work reflects the people, processes, and environments in which they are put to work.”

The framework advocates for healthcare AI to be viewed as part of a larger “intervention ensemble,” or a set of practices, procedures, and knowledge that enable care delivery. This conceptual shift characterizes AI models as “sociotechnical systems,” a term that describes how the tool’s computational functioning reflects the values and processes of the people and environment surrounding it.

By viewing healthcare AI in this way, the researchers hope that the framework can help advance responsible implementation of these tools.

The authors noted that previous studies and frameworks exploring ethical AI integration in healthcare have been largely descriptive, focusing on how human systems and AI systems interact.

In contrast, their framework was developed to take a more proactive approach, guiding stakeholders on how to integrate AI tools into clinical workflows in ways most likely to benefit patients.

The researchers indicated that their framework can be used to drive institutional insights and to guide regulation, as well as to appraise already-deployed health AI tools and ensure that they are being used ethically and responsibly.

To demonstrate how their approach can be used, the authors applied it to a case study of the IDx-DR system, a well-known AI tool designed to screen for more-than-mild diabetic retinopathy. For this illustration, the researchers defined the intervention ensemble for the system, connecting the tool's intended benefits and goals to the evidence base for the empirical claims surrounding it.

“Only a small majority of models evaluated through clinical trials have shown a net benefit,” said co-author Melissa McCradden, PhD, a bioethicist at The Hospital for Sick Children and an assistant professor of clinical and public health at the Dalla Lana School of Public Health. “We hope our proposed framework lends precision to evaluation and interests regulatory bodies exploring the kinds of evidence needed to support the oversight of AI systems.”

As interest in AI grows across the healthcare sector, researchers and other stakeholders are increasingly concerned with how these tools can be developed and deployed responsibly.

This week, the American Medical Association (AMA) published seven principles to guide the development, deployment, and use of healthcare augmented intelligence, the association's preferred term for artificial intelligence.

The guidance builds on existing AI policies and seeks to support the establishment of a national governance structure for health AI.

The principles also act as a cornerstone for the AMA’s advocacy strategy around these technologies, an approach that has thus far prioritized the implementation of national policies to ensure health AI is ethical, equitable, responsible, and transparent.