Quality & Governance News

Researchers Call for Outcome-Centric Approach to Health AI Regulation

University of California San Diego researchers argue that healthcare AI regulations should require developers to demonstrate how these tools impact patient outcomes.



By Shania Kennedy

In a recent viewpoint published in the Journal of the American Medical Association (JAMA), researchers from the University of California San Diego (UCSD) argued that the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI) does not sufficiently address the role that patient outcomes should play in healthcare AI regulation.

The executive order establishes standards and guardrails for AI stakeholders across industries in the United States, and it is set to have a potentially significant impact on healthcare. The order’s healthcare mandates focus heavily on developing regulatory strategies to promote safety, quality, privacy, and equity in the deployment of these technologies.

While these are valuable metrics for improving healthcare, the research team asserted that the approach is limited because it does not focus sufficiently on patient outcomes.

“The goal of medicine is to save lives,” explained senior author Davey Smith, MD, head of the Division of Infectious Disease and Global Public Health at UCSD School of Medicine and co-director of the university’s Altman Clinical and Translational Research Institute, in a news release. “AI tools should prove clinically significant improvements in patient outcomes before they are widely adopted.”

Despite this, the authors noted, the executive order leans toward what they call “process-centric” regulations, which mandate compliance by governing the processes developers use to build their products.


“Outcome-centric” regulations, on the other hand, require developers to demonstrate that their product achieves its intended clinical impact. Healthcare regulators have typically relied on process-centric regulations to mitigate problems that have already resulted in poorer patient outcomes.

The researchers further explained that in some areas, such as pharmaceutical manufacturing, companies are required to follow process-centric regulations because adhering to prescribed processes reliably improves outcomes for patients using their products.

The healthcare AI market requires a different approach, however. In other healthcare verticals, problems are often treated as “teachable moments” that guide regulatory efforts, but the newness of many AI technologies makes such moments rare for now, the authors asserted.

They further noted that process-centric regulations for healthcare AI would be challenging to implement because of the technology’s rapid pace of development and increasing complexity. These qualities could make process-centric regulations quickly outdated or erroneous.

The research team also emphasized that the executive order does not effectively take into account existing Food and Drug Administration (FDA) regulatory pathways that use patient outcome assessments to guide approval for drugs and medical devices.


The authors illustrated the pitfalls of a process-centric regulatory approach to health AI using early warning systems for sepsis, which are designed to predict which patients will develop the condition before it becomes serious and potentially life-threatening.

However, a 2021 study evaluating the widely deployed Epic Sepsis Model found that the tool failed to flag 67 percent of patients who later developed the condition. Moreover, of the more than 2,500 patients who did develop sepsis, the model identified only 7 percent who had not already received timely treatment.
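To put those figures in familiar terms, the sketch below (in Python) converts the reported percentages into standard screening metrics. The raw counts are illustrative assumptions roughly consistent with the published evaluation, not the study's actual confusion matrix.

```python
# Sketch: translating the reported Epic Sepsis Model results into
# standard screening metrics. Counts are illustrative assumptions
# roughly consistent with the 2021 evaluation, not the study's
# exact confusion matrix.

sepsis_patients = 2552                    # patients who developed sepsis ("over 2,500")
missed = round(sepsis_patients * 0.67)    # the 67% the tool failed to flag
flagged = sepsis_patients - missed        # sepsis patients the tool did flag

sensitivity = flagged / sepsis_patients
print(f"Sensitivity: {sensitivity:.0%}")               # ~33%
print(f"Miss rate:   {missed / sepsis_patients:.0%}")  # the reported 67%

# Only ~7% of sepsis patients who had not already received timely
# treatment were flagged, i.e. the model added little beyond what
# clinicians had already caught on their own.
added_value_flags = round(sepsis_patients * 0.07)
print(f"Patients flagged before treatment: ~{added_value_flags}")
```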

“Would hospital administrators have as frequently implemented this system if rigorous performance assessments were required by regulators before the system was marketed? Given there are many early warning systems for sepsis, how will hospital administrators know which system to implement without regulators requiring comparative outcomes data?” the authors wrote.

They argued that an outcome-centric approach could help prevent such problems from arising in future healthcare AI tools.

“We are calling for a revision to the White House Executive Order that prioritizes patient outcomes when regulating AI products,” said John W. Ayers, PhD, deputy director of informatics in the Altman Clinical and Translational Research Institute. “Similar to pharmaceutical products, AI tools that impact patient care should be evaluated by federal agencies for how they improve patients’ feeling, function, and survival.”


The authors indicated that an outcome-centric approach should require companies to demonstrate that their AI models generate clinically relevant differences in patient outcomes before those tools can be brought to market.

“We believe AI regulatory assessments should be grounded in clinical evidence regarding how patients feel, function, or survive in rigorously designed studies, such as randomized clinical trials, which is consistent with regulatory standards applied to new drugs that also require a net clinically meaningful improvement in patient outcomes compared with a placebo,” they noted.
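As a concrete illustration of the evidentiary bar the authors describe, the hedged sketch below runs a two-proportion z-test on a hypothetical two-arm randomized trial comparing an AI-guided workflow with usual care on 30-day mortality. Every number in it is invented for illustration; nothing here comes from the viewpoint or any real trial.

```python
import math

# Sketch: evaluating an AI tool on a patient outcome from a randomized
# trial, in the spirit of the standard the authors propose. All numbers
# below are hypothetical; this is not data from any real study.

def two_proportion_ztest(events_a, n_a, events_b, n_b):
    """Two-sided z-test for a difference in event proportions."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, z, p_value

# Hypothetical trial: 30-day mortality, AI-guided care vs. usual care.
diff, z, p = two_proportion_ztest(events_a=80, n_a=1000,    # AI arm: 8.0%
                                  events_b=110, n_b=1000)   # control: 11.0%

print(f"Absolute risk difference: {diff:+.1%}")  # -3.0 percentage points
print(f"z = {z:.2f}, p = {p:.4f}")

# A regulator applying the authors' standard would ask not only whether
# p is small, but whether a 3-point absolute mortality reduction is a
# clinically meaningful improvement in how patients feel, function,
# or survive.
```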

The research team stated that regulatory stakeholders already have levers to implement these outcome-centric approaches, such as the federal certification requirements for electronic health records (EHRs) under the Health Information Technology for Economic and Clinical Health (HITECH) Act.

However, the authors recognized that their outcome-centric approach could create barriers for both regulators and developers. To address these impediments, they posited, a dedicated federal agency may be needed to facilitate clinical AI evaluation by creating rules for digital health trial registries, standards, and approval mechanisms.

Despite this, the researchers highlighted that AI models can potentially be assessed more quickly than drugs, as they are delivered digitally and do not require the phased, iterative trial design typically needed for drug development.

This, the authors concluded, would allow healthcare AI to be regulated in a way that prioritizes patient outcomes without stifling innovation.