More Legal Clarification Needed For Clinical Algorithm Development, Use

Researchers argue that without further legal clarity, recent HHS and FDA directives on the development and use of clinical algorithms may worsen patient outcomes.


By Shania Kennedy

Researchers explored the intersection of clinical algorithms, anti-discrimination laws, and medical device regulation in a JAMA viewpoint published this month, arguing that recent directives from the US Department of Health and Human Services (HHS) and Food and Drug Administration (FDA) on the fairness, accuracy, and transparency of clinical algorithms may stifle research and worsen patient outcomes.

The authors noted that clinical algorithms include both complex, automated tools, such as sepsis alert systems, and simpler tools for risk calculation, prediction, and clinical scoring. These algorithms have the potential to reduce the implicit biases and medical errors that can occur during clinical decision-making, but they can also exacerbate disparities and compromise patient safety if insufficiently validated.
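To make that spectrum concrete, here is a minimal sketch of one such simple clinical scoring tool, modeled on the published qSOFA criteria used in sepsis screening. The viewpoint itself contains no code; the function below and its variable names are purely illustrative.

```python
# Illustrative sketch only: a minimal qSOFA-style bedside score, one example of
# the "simpler tools for risk calculation ... and clinical scoring" described
# above. Thresholds follow the published qSOFA criteria; the function and its
# names are hypothetical and not drawn from the viewpoint.

def qsofa_score(respiratory_rate: int, systolic_bp: int, gcs: int) -> int:
    """Return the qSOFA score (0-3): one point per criterion met."""
    score = 0
    if respiratory_rate >= 22:   # tachypnea (breaths/min)
        score += 1
    if systolic_bp <= 100:       # hypotension (mm Hg)
        score += 1
    if gcs < 15:                 # altered mentation (Glasgow Coma Scale)
        score += 1
    return score

if __name__ == "__main__":
    # A score of 2 or more flags elevated risk of poor outcome
    # in patients with suspected infection.
    print(qsofa_score(respiratory_rate=24, systolic_bp=95, gcs=15))  # -> 2
```

Even a tool this simple embeds clinical thresholds that must be validated, which is why the absence of binding guidelines matters.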

Currently, no binding, comprehensive guidelines exist for the development, validation, and remediation of these algorithms, meaning that efforts to address biases or inaccuracies remain voluntary and legally unenforceable, the authors stated. The FDA and HHS have recently taken steps toward creating such guidelines, including releasing new guidance for AI-driven clinical decision support tools and a proposed rule to update Section 1557 of the Affordable Care Act (ACA), known as §92.210.

Section 1557 of the Affordable Care Act makes it unlawful for healthcare professionals who receive federal funding to discriminate based on protected traits, such as race and sex. The proposed rule would extend these anti-discrimination requirements to clinical algorithms.

The authors pointed out that the HHS does not define illegal algorithmic discrimination in the proposed rule, instead opting to provide examples where a potential “discrimination concern” could arise.

A separate research letter published recently criticized this move, explaining that the proposed rule places liability on healthcare providers and entities for discrimination that may result from a clinical decision based on a biased algorithm.

The authors of the current viewpoint echoed these sentiments, noting that violations of Section 1557 can result in lawsuits, funding loss, and federal enforcement action against healthcare entities and providers. Further, the lack of additional legal clarity in these directives may cause unintended harm because they may conflate “differential” and “discriminatory.”

Differential care refers to “understanding and modeling how clinical presentation, risk, and prognosis differ by patient characteristics,” the researchers explained. They noted that it does not inherently mean that the care is discriminatory. Rather, differential care is key to evidence-based medicine.
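As a purely hypothetical illustration of what algorithm-guided differential care can look like in practice, the toy logistic model below produces different risk estimates for patients who differ in a protected trait. All coefficients and names are invented for illustration and come from no validated model or from the viewpoint itself.

```python
import math

# Hypothetical sketch of "differential care": a toy risk model in which
# predicted risk varies with patient characteristics (age, sex), mirroring
# how presentation, risk, and prognosis can differ across groups.
# Coefficients are invented for illustration only.

def predicted_risk(age: float, is_female: bool) -> float:
    """Toy logistic model: risk rises with age and differs by sex."""
    logit = -5.0 + 0.06 * age + (-0.4 if is_female else 0.0)
    return 1.0 / (1.0 + math.exp(-logit))

# Differential output by design: the same age yields different risk
# estimates for a 60-year-old man and a 60-year-old woman.
print(f"{predicted_risk(60, is_female=False):.3f}")  # ~0.198
print(f"{predicted_risk(60, is_female=True):.3f}")   # ~0.142
```

Whether such differential output reflects sound epidemiology or unlawful discrimination is precisely the line the authors argue the HHS has not yet drawn.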

Proving that discrimination has occurred should require both the demonstration of observed differences and an evaluation of whether the differential care would advance or undermine fairness and equity, they argued. The HHS proposed rule does not elaborate on what constitutes fairness and equity under its guidelines, which leaves clinicians and healthcare entities facing legal uncertainty.

Without additional clarity, health systems may hesitate to engage with clinical algorithms.

The authors stated that "the most risk-averse would discontinue using clinical algorithms that incorporate any Section 1557–protected traits, to the ultimate detriment of evidence-based medicine and patient care; or others (and likely most) would heed the [HHS’] warning not to ‘overly rely upon a clinical algorithm,’ and would simply add disclaimers that algorithmic output does not replace clinical judgment, thereby reducing transparency in clinical decision-making while doing little to reduce actual care disparities."

To avoid the potential harm associated with these outcomes, the authors suggested that the FDA exercise enforcement discretion for interpretable clinical algorithms aimed at non–life-threatening and non–time-critical conditions and make that enforcement intention explicit for stakeholders.

The researchers also discouraged HHS from regulating clinical algorithms under Section 1557 until there is greater consensus about when algorithm-guided differential care becomes unlawful discrimination. Further, they encouraged the agency to explicitly state that differential care does not necessarily equal discriminatory care and outline available defenses for alleged violations.

The authors concluded that these actions could deliver immediate regulatory benefits without inhibiting epidemiological research or hindering evidence-based medicine.