How Does the White House AI Bill of Rights Apply to Healthcare?

Experts from Mayo Clinic Platform and DLA Piper weigh in on how the White House’s Blueprint for an AI Bill of Rights may impact healthcare and health AI regulation.

In October 2022, the White House unveiled its Blueprint for an AI Bill of Rights, providing guidelines for the design, use, and deployment of AI-based tools to protect Americans from harm as these technologies spread across industries. The release has raised questions about how the blueprint applies to healthcare.

The blueprint outlines five core principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback.

The blueprint is designed to serve as practical guidance for the US government, technology companies, researchers, and other stakeholders, but it is nonbinding and does not constitute regulatory policy. Its guidelines are meant to protect the public from the potential harms of automated systems. Amid the rise of healthcare automation, the blueprint may have repercussions for efforts to ensure the ethical use of health AI.

To discuss these and other implications of the blueprint for healthcare, HealthITAnalytics spoke with Danny Tobey, MD, JD, partner at global law firm DLA Piper, and John Halamka, MD, president of Mayo Clinic Platform and a representative for the Coalition for Health AI (CHAI).

APPLYING THE BLUEPRINT’S ETHICS TO HEALTHCARE

According to Tobey, the guidelines in the Blueprint for an AI Bill of Rights are essentially ethical considerations meant to protect users of AI and automation from harm. As such, applying them in healthcare raises the challenge of instilling ethical standards in health AI use.

“In my experience, the biggest challenge is that healthcare is so personalized and patient-specific,” he explained. “So, when you start with very broad ethical principles like fairness or accountability, they sound great, but what does that mean for a particular patient with a particular disease? We're in the era of precision medicine, but our agreement about AI ethics is really far from the level of precision that we apply to patients.”

This lack of agreement could have far-reaching effects on patient access and health equity, making real-world applications of ethical AI guidelines in healthcare more crucial yet increasingly difficult.

“I think we're going to have a lot of hard work bringing those feel-good principles down to earth and really debating what those mean when you're dealing with real patients, and you're affecting things like deciding who gets access to limited medical resources,” Tobey stated. “It's great to say we need to be fair, but we've been debating as human beings for 2,000 years or more about what fair means. It's not going to get any easier now that we've added artificial intelligence to the mix.”

Halamka echoed these concerns around health equity and fairness, noting that deploying ethical AI in healthcare requires model developers and other stakeholders to be cognizant of biases, data quality concerns, and other factors that can impact a model’s utility, generalizability, fairness, and predictability over time.

“If we're going to deploy ethical AI for the use of the providers and patients of this country, it has to be not a project, but a process that is ongoing forever — curating the data, developing the models, validating the models, monitoring the models, and doing it for each location where a model might be used,” he stated.

If this process is not ongoing, issues can arise at every step along the continuum: biased data; insufficient data depth, breadth, and spread; a model generating outputs from flawed inputs; or data shift, Halamka explained.
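To make one step of that monitoring loop concrete, here is a minimal Python sketch of checking a deployed model's inputs for data shift, one of the failure modes Halamka lists. The feature, distributions, and significance threshold are all hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch of ongoing model monitoring: flag data shift by
# comparing a feature's live distribution against its training
# distribution with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train_values: np.ndarray,
                         live_values: np.ndarray,
                         alpha: float = 0.01) -> bool:
    """Return True when the live distribution differs significantly
    from the training distribution for this feature."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Example with a hypothetical lab value whose live mean has shifted.
rng = np.random.default_rng(seed=42)
train_creatinine = rng.normal(loc=1.0, scale=0.3, size=5_000)
live_creatinine = rng.normal(loc=1.3, scale=0.3, size=500)

if detect_feature_drift(train_creatinine, live_creatinine):
    print("Data shift detected: revalidate the model before further use.")
```

In a real deployment, a check like this would run continuously across many features and model outputs, and at each site where the model is used, matching Halamka's point that validation is a process, not a project.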

USING CONSENSUS AS A STARTING POINT

Many of these issues can be addressed with data science and analytics techniques, but there are many approaches to choose from, which can lead to debate about which is best or most appropriate in a given scenario.

Tobey suggested that those interested in applying the blueprint to healthcare AI should focus on areas of widespread consensus, such as the problem of discrimination, which the blueprint itself highlights.

“One key takeaway is that discrimination is a problem in artificial intelligence that we can all agree on and we actually know how to solve. To me, that's huge. We don't all agree on what transparency means, what explainability means, what fairness means. We're probably going to continue debating those for thousands of years. But everyone has very quickly [agreed] that AI can magnify discrimination,” he said.

He explained that AI magnifies the problem of discrimination because it is a backward-looking technology, trained on data from the past. It cannot detect when that data leads it to replicate or amplify past discrimination, which heightens concerns over fairness, bias, and ethics when the technology is applied in an industry like healthcare.

Tobey praised the blueprint’s focus on addressing discrimination, stating that this gives health AI stakeholders something actionable to work with and toward. But he also noted that the guidelines are just that, leaving room for interpretation regarding how they may or may not be adopted into policy.

“Focusing on discrimination is a problem that we actually can tackle today in artificial intelligence and really improve the quality of AI and stop harms before they happen, so I feel good about that,” Tobey said. “I think maybe the biggest challenge of what the White House did is that this is not a Bill of Rights; it's a blueprint for a Bill of Rights. And they were appropriately honest and transparent about that, but what it means is we're still going to have to wait for a final statement of what the rights and rules of the road are for another day. Maybe that's good because we don't have [a] consensus yet, but I think innovators are looking for certainty about what the rules of the game are going to be, and the blueprint doesn't get us there. It raises as many questions as answers.”

THE ROLE OF THE BLUEPRINT IN AI POLICY & REGULATION

Many of the questions raised by the blueprint center on what role nonbinding guidelines may play in healthcare AI policy and regulation moving forward. These include the Blueprint for an AI Bill of Rights itself and CHAI's recently published 'Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare,' which recommends ethical health AI guidelines to support high-quality care and increase AI credibility.

“When you read [the blueprint], you can see that they were thinking very hard about healthcare,” Tobey stated. “And one of the clues we have to that is the White House gave five core protections that everyone in America should be entitled to, and the first core AI protection listed was safe and effective AI systems. And anyone who's spent a little bit of time in healthcare knows that safety and efficacy are the two legal mandates of the [US] Food and Drug Administration.”

The White House may have been studying AI and healthcare as an example of what happens when AI meets a real-world sector application, he added. This gives healthcare organizations an opportunity to lead the adoption of ethical considerations like those outlined in the blueprint.

“I don't think it's a coincidence that the No. 1 principle the White House articulated for all sectors happens to be the statutory mandate of the FDA… I think the good news there for AI healthcare practitioners and experts is safety and efficacy aren't just buzzwords. We have a vast body of history and experience on what safety and efficacy mean within the healthcare sector because that's the statutory and regulatory expertise of FDA and those of us who work with FDA very closely,” Tobey explained.

However, he acknowledged that the nonbinding nature of the blueprint might slow adoption.

“The Blueprint for the AI Bill of Rights has everything except rights; there's no legal teeth to these suggestions,” Tobey said. “And maybe that's the right approach because we have to agree on general principles before we nail down actual enforceable legal rules. But at the end of the day, we're living in another state of uncertainty.”

But even without regulation, economic and consumer pressure may play a role in driving the adoption of ethical health AI, according to Halamka.

For instance, when someone buys a car, they may read Consumer Reports to evaluate how the car performs in various areas and on safety tests and use that information to make a purchasing decision. The same phenomenon can occur within the health AI arena, creating an “economic model by which the right algorithms go to the right place at the right time,” he explained.

Some of CHAI’s efforts are focused on creating something akin to Consumer Reports for healthcare AI models.

“What we're doing with [CHAI] is we're taking a lot of the ideas from the blueprint for the AI Bill of Rights and saying, ‘Could you actually create a quantitative analytic for every algorithm to be used in healthcare?’ And then, be very transparent about, ‘does this algorithm work or not? Is it biased or not? Where is there a population for which it's fabulous and a population for which it's lousy?’” Halamka said.
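A hedged sketch of what such a quantitative analytic might look like in practice: the same performance metric computed separately for each population subgroup, making it transparent where an algorithm performs well and where it performs poorly. The column names, grouping, and toy data below are invented for illustration, not drawn from CHAI's work.

```python
# Sketch of a per-population "report card": one metric (AUROC)
# computed per subgroup so reviewers can see where a model is
# strong and where it is weak. All names and data are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute AUROC per subgroup from true labels and model scores."""
    rows = []
    for group, subset in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(subset),
            "auroc": roc_auc_score(subset["label"], subset["score"]),
        })
    return pd.DataFrame(rows)

# Toy data where the model ranks cases well at one site and badly
# at another, the kind of disparity such an analytic would surface.
df = pd.DataFrame({
    "label": [0, 1, 0, 1, 0, 1, 0, 1],
    "score": [0.2, 0.9, 0.1, 0.8, 0.6, 0.4, 0.7, 0.3],
    "site":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(subgroup_report(df, "site"))
```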

From there, CHAI could potentially create a national registry of the metadata around these algorithms so that a clinician can leverage the EHR to pull down the algorithm that is likely to be most beneficial to the patient in front of them. Under such an AI delivery model, there may not be a binding regulation, but there could be economic incentives built in by regulators or other stakeholders.
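As a purely illustrative sketch, a registry entry of the kind Halamka describes might carry metadata along these lines; every field name below is hypothetical, not a CHAI specification.

```python
# Hypothetical shape of a national-registry record that an EHR could
# query to find a model suited to the patient at hand. Fields are
# illustrative only.
from dataclasses import dataclass, field

@dataclass
class AlgorithmRegistryEntry:
    name: str                      # identifier for the model
    intended_use: str              # clinical task the model supports
    training_population: str       # who the training data represents
    validated_sites: list[str] = field(default_factory=list)
    subgroup_auroc: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

entry = AlgorithmRegistryEntry(
    name="example-sepsis-risk-v2",
    intended_use="Early warning for adult inpatient sepsis",
    training_population="Adults at three US academic medical centers",
    validated_sites=["Site A", "Site B"],
    subgroup_auroc={"overall": 0.81, "age>=65": 0.74},
    known_limitations=["Not validated for pediatric patients"],
)
print(entry.intended_use)
```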

“In the meaningful use era, the folks at CMS and HHS said, ‘We actually have no power to regulate who buys what EHR,’” he stated. “It's just not [in] their scope. But they could say, ‘We're going to develop a set of certification criteria and guidelines, and if you want to participate in an incentive program, then you need to buy an EHR that follows those guidelines.’ And so, surprisingly, 95 percent of hospitals and doctors adopted certified EHRs because that was the way they got their incentives.”

Though questions remain about the blueprint's ultimate impact on healthcare AI regulation, the FDA has recently been making moves in the space.

In September 2022, the agency released new guidance recommending that some AI-powered clinical decision support tools, like sepsis prediction devices, should be regulated as medical devices. The move came as the FDA continued to grapple with the rapid growth of AI and machine learning technologies in healthcare.

Tobey predicted that future moves influenced by the blueprint might address data privacy protections and access, notice and explanation when users interact with automated systems, the right to opt out of automated systems in favor of a human alternative, automation bias, and clinicians’ AI literacy, as these are topics healthcare stakeholders are already engaging with and working on.

Regardless of future AI innovations or regulatory guidance, both Tobey and Halamka indicated that ethical AI deployment is ultimately a means of achieving the major goals of healthcare: improving care access, patient outcomes, and health equity.

“If we're not serving those ultimate goals of healthcare, which haven't changed in a thousand years, it doesn't matter what the tool is, we're doing something wrong,” Tobey said.