Quality & Governance News

Researchers Call for ‘Distributed Approach’ to Clinical AI Regulation

Public health researchers argue that centralized regulation of artificial intelligence at the national level is not sufficient to ensure safety, efficacy, and equity.


By Shania Kennedy

- Researchers argue that the national, centralized regulation of clinical artificial intelligence (AI) is not sufficient and instead propose a hybrid model of centralized and decentralized regulation.

In an opinion piece published in PLOS Digital Health, public health researchers at Harvard note that the growing number of clinical AI applications, combined with the need to adapt each application to differences between local health systems, creates a significant challenge for regulators.

Currently, the US Food and Drug Administration (FDA) regulates clinical AI under the classification of software-based medical devices. Medical device approval is typically obtained via premarket clearance, de novo classification, or premarket approval. In practice, this usually involves the approval of a “static” model, meaning that any change in data, algorithm, or intended use after initial approval requires reapplication for approval. To receive approval, developers must demonstrate a model’s performance on an appropriately heterogeneous dataset.

To improve parts of this process, the FDA has proposed a regulatory framework focused on modifications to clinical AI within the context of Software-as-a-Medical-Device (SaMD). This framework expands on the existing approach by adding new post-authorization considerations expected to be of greater importance for clinical AI, including recommendations for predetermined change control plans. These plans require algorithm manufacturers to specify which parameters of an application they intend to modify in the future and the methodology they will use to implement those changes.

The FDA’s approval processes are based on approaches designed to assess the safety and efficacy of drugs and conventional medical devices, which creates four distinct challenges for centralized AI regulation at scale. The first is that AI is easier to develop than a new drug or conventional medical device, which could produce a volume of AI submissions that regulators are not equipped to handle.

This volume problem is compounded by the second challenge for centralized regulation at scale, which is that AI technologies should change in response to changes in underlying data. The third challenge is that many AI algorithms are not equipped to determine causal relationships. This means that the reason behind an implemented application’s failure cannot always be predicted or determined, especially within the context of differing data and use cases.

The final challenge is that AI technology regulated in isolation cannot account for socio-technical factors at the local level, which ultimately determine the outcomes the technology generates.

Thus, centralized regulation alone cannot adequately ensure the safety, efficacy, and equity of implemented clinical AI systems, the researchers conclude. Instead, they suggest supplementing the existing centralized regulatory approach with a decentralized approach that can be deployed locally.

Under this hybrid approach, decentralized regulation would be the default for most clinical AI applications, with centralized regulation reserved for the highest-risk tasks. These high-risk tasks are those for which inference is entirely automated without clinician review, those with a high potential to negatively impact patient health, or those designed to be applied on a national scale, such as specific screening programs.
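
To make the proposed triage concrete, the sketch below encodes the risk criteria described above as a simple decision rule. It is a minimal, hypothetical illustration: the class, field names, and function are assumptions for this example, not part of the authors' proposal or any existing regulatory framework.

```python
# Hypothetical sketch of the hybrid model's risk triage; names and structure
# are illustrative assumptions, not an actual regulatory specification.
from dataclasses import dataclass


@dataclass
class ClinicalAIApplication:
    fully_automated_inference: bool    # inference runs without clinician review
    high_patient_harm_potential: bool  # failure could directly harm patients
    national_scale_deployment: bool    # e.g., a nationwide screening program


def regulatory_pathway(app: ClinicalAIApplication) -> str:
    """Return the default oversight pathway under the proposed hybrid model."""
    if (app.fully_automated_inference
            or app.high_patient_harm_potential
            or app.national_scale_deployment):
        return "centralized"    # reserved for the highest-risk tasks
    return "decentralized"      # local oversight is the default


# Example: a clinician-reviewed, single-hospital decision aid would default
# to decentralized (local) oversight.
print(regulatory_pathway(ClinicalAIApplication(False, False, False)))
```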

While such a hybrid approach is not currently feasible, the authors identify five prerequisites and institutional roles that would be needed to establish it.

The first is a specially trained workforce tasked with overseeing clinical AI evaluation, deployment, continuous monitoring, and re-calibration. This workforce would also lead the training of clinicians and data scientists in topics such as human-computer interaction, decision support implementation science, and algorithmic fairness. It would need to be paired with an oversight department to help protect patient confidentiality, ensure safe model performance, and prevent the technology from worsening health disparities.
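
As an illustration of what continuous local monitoring might involve, the following sketch compares a deployed model's recent accuracy at one site against its locally validated baseline and flags it for re-calibration when performance drifts. The baseline, margin, and metric are assumed values for this example, not details from the opinion piece.

```python
# Illustrative sketch of local post-deployment monitoring; the baseline,
# margin, and metric are assumed values, not part of the authors' proposal.
from statistics import mean

BASELINE_ACCURACY = 0.88   # assumed accuracy at local validation time
ALERT_MARGIN = 0.05        # assumed tolerated drop before review is triggered


def needs_recalibration(predictions: list[int], outcomes: list[int]) -> bool:
    """Flag the model for local review if recent accuracy drifts below baseline."""
    recent_accuracy = mean(int(p == y) for p, y in zip(predictions, outcomes))
    return recent_accuracy < BASELINE_ACCURACY - ALERT_MARGIN


# Example: the latest batch of locally labeled cases shows degraded accuracy.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
truth = [1, 0, 0, 1, 0, 0, 0, 1, 1, 1]
if needs_recalibration(preds, truth):
    print("Accuracy drift detected -- route the model for local re-calibration.")
```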

The second prerequisite is an accountability framework, which is necessary because there is almost no case law on clinician liability involving medical AI.

The third and fourth prerequisites are open data and AI registries to enhance data sharing and outcome monitoring. The final prerequisite is public engagement, which the authors argue would help address inequities and bolster the transparency of AI in healthcare.

The long-term growth of AI in healthcare will depend heavily on clinician and patient trust in the technology, according to the researchers. Building that trust, they argue, requires a hybrid model of clinical AI regulation that addresses concerns surrounding safety, fairness, and effectiveness.