Quality & Governance News

More Guidance Needed to Curb Discrimination by Clinical Algorithm Use

Researchers argue that the Department of Health and Human Services’ efforts to address discrimination do not provide enough guidance for algorithm users.


By Shania Kennedy

In a recent viewpoint published in JAMA, researchers explored the challenges of curbing discrimination by clinical algorithms, arguing that the US Department of Health and Human Services' (HHS) attempts to address the issue don't give sufficient guidance to providers using the algorithms and don't reflect some of the more pressing challenges in evaluating the tools for discrimination.

The authors from Harvard Law School and Penn State Dickinson Law stated that many clinical algorithms are flawed, either because they incorporate bias by design or because they are trained on biased data sets, making it important to address discrimination by algorithms. Last year, HHS announced its intention to support this effort by tackling the use of biased algorithms in healthcare decision-making and telehealth services through a proposed rule on Section 1557 of the Affordable Care Act (ACA).

Section 1557 prohibits discrimination based on race, sex, color, national origin, age, or disability by certain health entities, such as most health plans, hospitals, and physician groups participating in Medicare and Medicaid.

The researchers explained that penalties for violating Section 1557 could be significant, including an investigation by the Office for Civil Rights (OCR), suspension or termination of federal financial assistance from HHS, and compensatory damages under the statute's private right of action provision, which allows individuals to sue healthcare entities for discrimination.

In 2022, the Biden administration proposed a rule to update Section 1557. Its provision on clinical algorithms, §92.210, prohibits discrimination through the use of clinical algorithms in decision-making, stating that “a covered entity must not discriminate against any individual on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in its decision-making.”


The authors of the viewpoint article argued that while the increased use of clinical algorithms in medical practice may warrant an extension of Section 1557’s reach to reflect modern, digital health-driven practices, HHS must provide more guidance for algorithm users beyond the proposed rule and coordinate with the Food and Drug Administration (FDA) on how best to evaluate clinical algorithms.

The researchers pointed out that “[t]he intent of proposed §92.210 is not to prohibit or hinder the use of clinical algorithms,” but this intention may fall short because of the proposed rule’s other mandates, including one concerning liability for discrimination. HHS states that a covered entity wouldn’t be liable for a clinical algorithm it did not develop, but §92.210 indicates that the entity might be liable for a decision that relies on a biased clinical algorithm.

Because the inner workings of an algorithm can be difficult to fully understand, the proposed rule puts the burden on clinicians to be well-versed enough in data and computer science to evaluate, oversee, and correct for potential biases in algorithms, the authors explained.
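To make that burden concrete, an audit of this kind might resemble the following minimal sketch, which compares how often a hypothetical risk algorithm misses adverse events across patient groups. The data, group labels, and metric choice are illustrative assumptions for this example only; they are not drawn from the proposed rule, the JAMA viewpoint, or any standard HHS or the FDA has issued.

```python
# Illustrative sketch only: a minimal subgroup audit of a clinical algorithm's outputs.
# The records, group names, and chosen metric are hypothetical assumptions.
from collections import defaultdict

# Hypothetical records: (patient_group, algorithm_flagged_high_risk, actual_adverse_event)
records = [
    ("group_a", True, True), ("group_a", False, True), ("group_a", True, False),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
]

# Tally missed adverse events (false negatives) per group.
counts = defaultdict(lambda: {"missed": 0, "events": 0})
for group, flagged, event in records:
    if event:
        counts[group]["events"] += 1
        if not flagged:
            counts[group]["missed"] += 1

# Compare false-negative rates across groups; a large gap is one possible signal of bias.
for group, c in counts.items():
    fnr = c["missed"] / c["events"] if c["events"] else float("nan")
    print(f"{group}: false-negative rate = {fnr:.2f}")
```

Even a simplified check like this requires access to the algorithm's outputs, reliable outcome data, and judgment about which metric and which patient groups matter, which is part of why the authors see the expectation as demanding for individual clinicians.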

The proposed rule focuses on clinical algorithms that do not leverage artificial intelligence (AI) or machine learning (ML), as AI- and ML-based tools are more complicated to evaluate, showing that HHS understands the challenges of using algorithms responsibly. The agency further acknowledges those challenges by urging clinicians to consult the American Medical Association (AMA) framework for trustworthy augmented intelligence in healthcare.

However, the researchers argued that doing so is still beyond the capabilities of most clinicians outside of the largest healthcare systems. This raises equity issues, because smaller organizations and providers will have to choose between exposing themselves to higher liability and forgoing the clinical algorithms they may need to remain competitive.


To combat this, the researchers suggested that HHS provide additional leadership in this area in two ways.

The first is establishing a “safe harbor” for healthcare professionals: if a clinician can prove that they followed specified policies and recommendations, such as the AMA framework, they should not be held liable under §92.210.

The second is for HHS to coordinate with the FDA and clarify how best to evaluate clinical algorithms by developing standards for assessing an algorithm’s software for bias and identifying biases in decision-making algorithms. The authors added that the FDA should also be responsible for creating bias-mitigation standards for AI/ML developers to foster transparency about what the agency expects during premarket submission of medical devices.

Despite these potential solutions, the researchers stated that putting all liability on healthcare professionals would inhibit algorithmic innovation in healthcare while reinforcing equity issues. They concluded that advocacy is needed to encourage an adequate balance of interests, risks, and responsibilities among stakeholders and to ensure that bias introduced during algorithm design is reduced as much as possible before the tools reach the market.