Providers Seen as Less Liable for Following AI Recommendations

Potential jurors may believe that physicians who follow artificial intelligence recommendations are less liable for medical malpractice.

By Jessica Kent

Potential jurors may not be strongly opposed to providers’ acceptance of artificial intelligence medical recommendations, indicating that clinicians may be less liable for medical malpractice than commonly believed, a study published in the Journal of Nuclear Medicine revealed.

Clinical decision support tools increasingly rely on artificial intelligence algorithms for diagnosis and treatment recommendations, researchers noted. These personalized recommendations can deviate from standard care, potentially exposing providers to increased malpractice liability.

“New AI tools can assist physicians in treatment recommendations and diagnostics, including the interpretation of medical images,” said Kevin Tobia, JD, PhD, assistant professor of law at the Georgetown University Law Center, in Washington, DC.

“But if physicians rely on AI tools and things go wrong, how likely is a juror to find them legally liable? Many such cases would never reach a jury, but for one that did, the answer depends on the views and testimony of medical experts and the decision making of lay juries. Our study is the first to focus on that last aspect, studying potential jurors’ attitudes about physicians who use AI.”

Researchers conducted an online study of a representative sample of 2,000 adults in the US. Each participant read one of four scenarios in which an AI algorithm provided a drug dosage recommendation to a physician.

The scenarios varied the AI recommendation (a standard or nonstandard drug dosage) and the physician’s decision to accept or reject it. In all scenarios, the physician’s decision caused subsequent harm to the patient.

Study participants then evaluated the physician’s decision by assessing whether it was one that most physicians and a reasonable physician could have made in similar circumstances. Higher scores indicated greater agreement and, therefore, lower liability.

The results showed that participants used two different factors to evaluate physicians’ utilization of medical AI systems. The first was whether the treatment provided was standard, while the second was whether the physician followed the AI recommendation.

Participants judged physicians who accepted a standard AI recommendation more favorably than those who rejected it. However, a physician who received a nonstandard AI recommendation was not judged as less liable for rejecting it.

These findings demonstrate that physicians who follow AI advice may be considered less liable for medical malpractice than commonly thought, researchers said.

While prior studies suggest that laypersons are very averse to AI, this study shows that they are not strongly opposed to a physician’s acceptance of AI medical recommendations. This finding suggests that the threat of legal liability for physicians who accept AI recommendations may be smaller than previously believed.

The team expects that these results could increase the use of AI in healthcare.

“The findings speak to recent concerns about legal impediments to the use of AI precision medicine. Tort law may not impose as great a barrier to the uptake of AI medical system recommendations as is commonly assumed; in fact, it might even encourage the uptake of AI recommendations,” researchers said.

The research may also offer critical insight into patients’ attitudes toward physician AI use. While many believe that patients and laypeople are reluctant to trust AI tools’ recommendations, previous studies show the opposite: A 2018 survey conducted by Accenture showed that around 50 percent of patients would be willing to trust an AI nurse or physician with diagnoses, treatment decisions, and other direct patient care tasks.

Researchers on the current Journal of Nuclear Medicine study say their findings reinforce these attitudes toward AI.

“The study most directly examines laypeople as potential jurors, but it also sheds light on laypeople as potential patients. Important recent work in psychology shows that laypeople are algorithm-averse in other forecasting contexts, particularly when they see algorithms err,” researchers noted.

“But this study’s results suggest that laypeople are not as strongly averse to physicians’ acceptance of precision medicine recommendations from an AI tool, even when the AI errs.”

The study shows that while liability concerns are often cited as a chief obstacle to physician AI use, laypeople are not strongly opposed to providers’ adherence to AI recommendations.

“These results provide guidance to physicians who seek to reduce liability, as well as a response to recent concerns that the risk of liability in tort law may slow the use of AI in precision medicine,” the team concluded.

“Contrary to the predictions of those legal theories, the experiments suggest that the view of the jury pool is surprisingly favorable to the use of AI in precision medicine.”