
How Will Biden’s Executive Order on Trustworthy AI Impact Healthcare?

Legal experts detail how Biden’s executive order on safe, secure, and trustworthy artificial intelligence will impact AI regulation in healthcare.


In October 2023, President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The order establishes guardrails focused on promoting safety and security, protecting Americans’ privacy, advancing civil rights and equity, standing up for workers and consumers, promoting innovation, and advancing American leadership in the AI space.

These standards apply across various industries in the United States, including healthcare, and expand on existing safety protections laid out by other frameworks — like the White House’s Blueprint for an AI Bill of Rights.

Jason Schultz, healthcare partner at Barnes & Thornburg LLP, and Bryant Godfrey, partner and co-chair of the Healthcare & FDA Practice Groups at Foley Hoag LLP, spoke with HealthITAnalytics to discuss the implications of the executive order for healthcare as AI continues to advance in a regulatory gray area for the industry.

THE EXECUTIVE ORDER’S DIRECTIVES

“The biggest challenge that the executive order seeks to address is the extremely rapid development of AI without any additional assessment of the repercussions,” Schultz explained, noting that it provides a framework for creating standards, laws, and regulations around the technology across industries. The executive order also establishes a roadmap of subsequent actions that government agencies must take over the next several months to create that legal framework.

Godfrey indicated that the executive order helps create uniform standards for AI development. He underscored that the variability in software types and applications has made regulating these technologies challenging, particularly in healthcare.

“The executive order is a first step in getting the federal administrative agencies to start thinking about [how] to regulate these technologies in a manner that makes sense for the user – ultimately to the benefit of the end user if the AI is for patient care,” he explained.

The order directs government agencies to start forming task forces and strategic plans to establish a foundation for smart AI regulation. Additionally, Schultz noted that the executive order requires these agencies to share information and collaborate to develop strict standards to protect national security, the economy, and public health.

“The Executive Order has the potential to slow the development of artificial intelligence, but it does so to properly assess the risk, test the technology, and release AI with the appropriate safeguards in place,” he said.

“[The order] establishes a level playing field. It sets forth the standards, tools, and tests that all AI will be subject to prior to its release,” he continued.

Godfrey indicated that the executive order sets the stage for how agencies approach AI regulation.

“For instance, [the Food and Drug Administration (FDA)] is primarily looking at products, whereas other agencies — like [the US Department of Health and Human Services (HHS)] — are not product specific. They are more healthcare services [and] delivery in nature, so the recommendations might look different depending on what the AI is for.”

The executive order directs HHS to “prioritize grantmaking and other awards, as well as undertake related efforts, to support responsible AI development and use” by collaborating with the private sector. The order also instructs HHS to publish a plan addressing the implementation and use of algorithmic systems to ensure they contribute to equitable outcomes, and to establish an AI Task Force charged with creating a strategic plan to guide the responsible deployment of AI in the health and human services sectors, among other objectives.

HHS has already begun some of this work with the recent release of the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule. The rule establishes transparency requirements for AI and other predictive algorithms used in certified health IT, which will apply by the end of 2024.

While the executive order and rules like HTI-1 fill gaps in current healthcare AI regulations, they are not without their limitations and pitfalls.

LIMITATIONS AND THE REGULATORY PROCESS

Schultz and Godfrey emphasized that the executive order currently lacks a strong enforcement mechanism and instead acts as a guideline for federal agencies to align with the White House’s expectations around AI regulation.

“This executive order is the first step in a long staircase,” Schultz stated. “It falls short in many areas, but I think it attempts to at least address those by assigning different obligations to different government agencies, [directing them] to at least start preparing reports, looking at the research, and eventually, creating regulations to govern the development of AI.”

He highlighted concerns that the order may not be able to slow the rapid development of AI that is already underway. Schultz further described the executive order as “confused,” posing questions around the following topics:

  • Whether the order can actually slow the development of AI
  • How AI may evolve, if it isn’t already doing so
  • Whether AI tools are actively learning and modifying themselves based on new data

“From that perspective, I question whether the government has a misunderstanding of AI already,” he said.

He also underscored other potential pitfalls of the executive order, such as AI developers relocating outside the US to avoid the legal frameworks established by its directives. Funding is another significant concern, as securing adequate resources for the order’s initiatives may prove challenging, and without proper funding or enforcement, preventing the misuse or theft of these technologies may be difficult.

The agencies that received directives from the executive order are expected to create reports and standards within a relatively short time, most within nine months or less. Schultz questioned whether agencies could comply with the executive order within those narrow time frames, citing a lack of resources that some agencies already experience.

“In some ways, [the order] feels like an arms race or a race to the moon. A lot needs to be completed in a very short amount of time. So, [the White House is] throwing deadlines out there without knowing if they can actually be met,” Schultz said.

He further indicated that the executive order may fall short in terms of its impact on how government agencies define AI. The order encourages the agencies involved to collaborate on a regulatory definition of “artificial intelligence,” which is a positive step, but the evolving nature of AI technologies may undermine the validity and usefulness of any such definition over time.

Another major limitation of the executive order is that it does not account for how states and the federal government interact. Schultz noted that much of healthcare regulation occurs at the state level, making regulating health AI across state lines complex.

“How do state licensing laws that apply to medicine interact and cause different environments for AI development over the entire country, and how does that impact health systems and their willingness to want to expend resources on AI development if it's not something that could even be used in their state yet?”

Despite its limitations, the executive order has an important role to play in health AI regulation.

“Where the executive order falls in the regulatory process is initiating, or at least putting some sort of accountability on administrative agencies to come up with responsible regulations,” Godfrey said. “Also, because [the order] is out there, it puts the public on notice, so the public can hold agencies accountable.”

He explained that if agencies are not progressing quickly enough on the executive order’s directives, interested parties can file citizen petitions to spur action, such as petitioning the FDA if they feel the agency is taking too long on an initiative.

The executive order helps open that dialogue with the public and industry, which can help streamline the process.

“The government relies on industry to give them that inside look into what [the issues are],” Godfrey stated. “Where are they in terms of the development? What are some of the challenges that they're running into and that they would like to see uniformity with? Hearing from industries [can] help inform any policies or regulations before just putting pen to paper.”

How this open dialogue will drive action on the executive order’s directives remains largely to be seen, but the process is likely to evolve as AI continues to advance.

MOVING FORWARD

AI is rapidly developing across all industries — not just healthcare — raising concerns that the executive order may become outdated in some ways, like some other responsible healthcare AI frameworks that have come before it.

Schultz pointed out that with the November 2022 release of ChatGPT, there was a drastic shift in the public’s awareness of AI and how industry stakeholders approached it. Now, the AI landscape is markedly different in many ways, and there’s no way to predict whether another similar shift might occur in the future.

This change is reflected in the language of some AI frameworks, such as the White House Blueprint for an AI Bill of Rights. Schultz indicated that the Blueprint, published in October 2022, was written with a more skeptical, cautious tone, while the executive order’s language is more positive. While still mindful of risks, the executive order’s approach acknowledges that AI is developing rapidly and must be regulated while promoting a more open, competitive AI ecosystem in the US.

“[There is] this shifting, evolving view that's already occurred in just one year since the Blueprint for an AI Bill of Rights, and that's why I wonder what shift we'll see over the next year as these new reports come out from each of the different agencies,” Schultz said.

But even as frameworks become outdated or technology changes significantly, Godfrey explained that AI regulation will benefit from engaging various stakeholders.

“When entering into a new frontier from a regulatory standpoint, it's just one of those things where the government might think something, go back to the starting board based upon experience, and then ultimately, [need] to pull in more viewpoints and more experience,” he noted.

“It's one of the things where having the right folks and viewpoints at the table and knowing who to involve [is vital],” he continued. “It's critical to regulating something that's as versatile as AI.”

Moving forward, as agencies work to comply with the executive order, the biggest challenge for the federal government will be balancing the need to get healthcare AI right without stifling innovation.

“The executive order acknowledges what may not have already been apparent back in 2022, which is the fact that AI is already here, it's already being developed without many standards, it needs regulation, and that the US needs to act immediately and swiftly to impact AI development or else we're going to be left behind,” Schultz stated.