Quality & Governance News

Federation of State Medical Boards publishes AI governance best practices

New FSMB guidelines outline how providers can meet their professional and ethical duties when utilizing artificial intelligence in clinical settings.


By Shania Kennedy

The Federation of State Medical Boards (FSMB) published a report outlining best practices for state medical boards to govern the use of clinical AI, including recommendations to reduce potential harm to patients and help providers navigate the technology’s rapidly evolving landscape.

The report and its recommendations — drafted by FSMB’s Ethics and Professionalism Committee and adopted by the FSMB House of Delegates last week — are designed to “aid physicians and state medical boards in navigating the responsible and ethical incorporation of AI centered on education, emphasizing human accountability, ensuring informed consent and data privacy, proactively addressing responsibility and liability concerns, collaborating with experts, and anchoring AI governance in ethical principles.”

The guidelines emphasize that the use of AI in clinical applications requires continuous monitoring and refinement driven by collaboration among providers, regulatory agencies and data scientists. While the report acknowledges that state medical boards cannot regulate AI directly, it notes that the boards have explicit authority to regulate clinicians who choose to use these tools.

To that end, one of the report’s key recommendations centers on liability, indicating that clinicians are responsible for their use of AI tools and should be held accountable for any harm caused as a result.

“Once a physician chooses to use AI, they accept responsibility for responding appropriately to the AI’s recommendations,” the report states.

Specifically, the guidelines underscore that a physician’s use of AI must uphold the standard of care.

If a clinician follows an AI’s recommendations, they should be prepared to provide a rationale for doing so, as following the recommendations without a rationale may not be within the standard of care.

However, the guidelines indicate that clinicians should also provide a rationale for their decision-making if they choose not to follow the AI’s recommendation. Because disagreeing with the AI could lead to harmful outcomes for the patient, the report notes, a clinician’s decision to override a recommendation should stem from the good-faith belief that doing so upholds the standard of care.

In scenarios where providers must contend with black box medical AI — tools whose decision-making is not transparent to users — the FSMB states that these algorithms should not necessarily be avoided, but that clinicians using them “should still be expected to offer a reasonable interpretation of how the AI arrived at a particular output (i.e., recommendation) and why following or ignoring that output meets the standard of care.”

These liability guidelines come as providers and healthcare organizations across the US grapple with how AI should be governed and who should be held responsible when AI use in clinical decision-making leads to adverse patient outcomes.

Stakeholders have recently published a spate of healthcare AI-related guidelines and best practices — including the National Academy of Medicine’s AI Code of Conduct and President Biden’s Executive Order on Trustworthy AI — but many do not discuss liability concerns in depth.

Organizations like the American Medical Association (AMA) maintain that holding physicians liable for the use of AI-enabled tools in clinical care poses risks to the successful integration of these technologies.

Alongside its stance on AI-related liability, the FSMB report recommends that providers be transparent about their AI use and that state medical boards create clear guidance for licensees on disclosing AI usage to patients.

“These guidelines are some of the first that clearly outline steps physicians can take to meet their ethical and professional duties when using AI to assist in the delivery of care,” said Humayun Chaudhry, DO, MACP, President and CEO of FSMB, in a press release. “We hope that this policy will reduce the risk of harm to patients and guide physicians by providing recommendations for state medical boards on how to promote safe and effective incorporation of AI into medical practice in a manner that prioritizes patient wellbeing.”