
Can Healthcare Avoid “Black Box” Artificial Intelligence Tools?

Artificial intelligence tools are complex by nature, but developers in healthcare should strive to deliver as much data transparency as possible.


By Jennifer Bresnick

Artificial intelligence is taking the healthcare industry by storm as researchers share breakthrough after breakthrough and vendors quickly commercialize advanced algorithms offering clinical decision support or financial and operational aid.

Terms like machine learning, deep learning, neural networks, random forests, and unsupervised learning are becoming part of the everyday lingo for analytics enthusiasts, but even experts in big data can sometimes feel left in the dark when trying to figure out exactly how these new tools come to their conclusions.

“Black box” software is nothing new, in healthcare or elsewhere.  Most users implicitly trust the results of their various tools and systems without knowing exactly how “input A” gets translated into “output B.”

Without extensive training in software design and development, data science, or engineering, it is impossible for the average consumer to understand the intricate inner workings of the applications, machines, and devices that now form so much of the digital substrata upon which we all depend. 

And in most cases, this intimate knowledge isn’t necessary. 


But when it comes to clinical decision making, which is bound just as much by moral imperatives as it is by liability laws, it is critical for providers and patients to have as much transparency as possible - especially if a computer is helping to make recommendations about diagnoses or treatment protocols.

Even if it is difficult for users to understand all the nuances of how a particular algorithm functions, healthcare professionals must be able to independently review the clinical basis for recommendations generated by AI or other machine learning tools, stresses the Clinical Decision Support Coalition in its recent voluntary guidelines for developers. 

What can be explained should be explained, the Coalition says, and machine learning tools should clearly communicate how confident they are that a particular recommendation or association is trustworthy.

“[Providing a clinical rationale for a decision] goes beyond merely identifying the source of the clinical rules, and includes a reasonable explanation of the clinical logic by which the software arrived at its specific recommendation based on patient specific information,” the guidance states.

“This clinical rationale will vary greatly depending, for example, on whether the software is assisting with a diagnostic or therapeutic decision. The software communicates the clinical thought process behind the recommendation, not necessarily the computer science functionality used.”
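In practice, that means a decision support tool should return more than a bare prediction. The sketch below is a minimal, hypothetical illustration in Python of how a tool might bundle a recommendation with a confidence score and the inputs that weighed most heavily on it; the random forest, feature names, and toy training data are invented for the example and are not part of the Coalition's guidance or any validated clinical model.

```python
# Hypothetical sketch: pairing a recommendation with a confidence score and a
# plain-language rationale. Feature names, labels, and training data are
# illustrative stand-ins, not a validated clinical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["hba1c", "fasting_glucose", "bmi", "age"]  # hypothetical inputs

# Toy training data purely for demonstration; a real tool would be trained and
# validated on curated, clinically reviewed datasets.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, len(FEATURES)))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def recommend(patient_row):
    """Return a recommendation plus the context a clinician can review."""
    proba = model.predict_proba([patient_row])[0]
    label = int(proba.argmax())
    # Rank the inputs the model weighted most heavily overall as a rough proxy
    # for "what drove this"; a production tool might use per-patient
    # explanation methods instead.
    ranked = sorted(zip(FEATURES, model.feature_importances_),
                    key=lambda kv: kv[1], reverse=True)
    return {
        "recommendation": "flag for therapy review" if label else "no change suggested",
        "confidence": round(float(proba.max()), 2),
        "key_factors": [name for name, _ in ranked[:3]],
    }

print(recommend([1.2, 0.8, -0.3, 0.1]))
```

A production system would go much further, pairing per-patient explanations and validated clinical logic with that output, but even this minimal structure gives a clinician something to review rather than a verdict to accept on faith.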


This approach may be particularly important for tools that do not fall under the FDA’s regulatory purview and can be marketed without the same level of vetting.

In December of 2017, the FDA released guidance on clinical decision support tools, explaining that the agency is focusing its efforts on higher-risk systems with the potential to negatively impact an individual.

Low-risk tools, like patient-facing reminder apps for chronic disease management that may rely on algorithms to generate alerts, are not currently under FDA review.

But clinical users and their patients may be unclear about the limitations of these products, and vendors whose offerings fall below the FDA's risk thresholds may not be obligated to be as forthcoming about those limits as providers would like.

Regulators like the FDA will have some tough choices to make about risk and reward as AI continues to advance.  While the 21st Century Cures Act provides some parameters for understanding how to classify clinical decision support systems, the constantly changing environment of analytics is an ongoing challenge.


“Regulators would not be eager to risk an incorrect computer decision harming a patient when no one would be able to explain how the computer made its decision—or how to prevent a repeat of the situation,” noted a recent report from McKinsey Global Institute.

And “how much patients would trust AI tools and be willing to believe an AI diagnosis or follow an AI treatment plan remains unresolved,” the brief added. 

These concerns have contributed to relatively low adoption rates in healthcare – and they are compounded by a general lack of faith in the industry’s big data analytics abilities.  Data siloes are regrettably common in healthcare, and few organizations have yet solved the problem of aggregating and normalizing their data assets in a way that allows robust and seamless analysis.

In order for AI to flourish, users must be confident not only that the algorithm is based on sound clinical guidelines, but that the data underpinning these tools is accurate, timely, trustworthy, and reliable.

“If in fact there is little data to support the accuracy of the software, the software must clearly state that,” the CDS Coalition said.

“The software should provide a thorough explanation of the data sets used to feed and test the machine to provide important context and assurance to the clinician. The discussion should include any limitations or potential biases in the methods used to gather the data.”
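The guidance does not prescribe a format for that disclosure, but one hypothetical way to structure it, borrowed loosely from the "model card" and datasheet conventions that have grown up in the wider machine learning community, is a simple machine-readable record that ships with the software. Every field name and value in the Python sketch below is an invented placeholder, not an example drawn from the Coalition's document.

```python
# Hypothetical sketch of a dataset disclosure that could accompany a machine
# learning CDS tool, in the spirit of the CDS Coalition's recommendation.
# All names, dates, and figures below are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class DatasetDisclosure:
    name: str
    source: str                  # where the records came from
    collection_period: str       # when the data were gathered
    record_count: int            # patients or encounters represented
    known_limitations: list = field(default_factory=list)
    potential_biases: list = field(default_factory=list)

training_data = DatasetDisclosure(
    name="Example training cohort (placeholder)",
    source="De-identified EHR extracts from a single, unnamed academic medical center",
    collection_period="2012-2016",
    record_count=48_000,
    known_limitations=[
        "Single-site data; external validity not established",
        "Lab values missing for a meaningful share of encounters",
    ],
    potential_biases=[
        "Patient population skews older and predominantly urban",
    ],
)

print(training_data)
```

Surfacing a record like this in the product's documentation, or in the user interface itself, gives clinicians the context the Coalition calls for: what fed the model, what tested it, and where it may fall short.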

Establishing a high level of trust in the data used to fuel machine learning tools will be essential for ensuring that the resulting care quality is high and patients are receiving the help they need to manage chronic conditions or acute illnesses.

“We are increasingly using very complicated algorithms and cutting edge artificial intelligence to predict and guide health care, such as recommending a certain dose of insulin to a diabetic patient,” says I. Glenn Cohen, Faculty Director at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

“Some of these new techniques help us innovate and push the boundaries of medicine, but others may be based on errors and lead to sub-optimal care.”

The Petrie-Flom Center is collaborating with the Center for Advanced Studies in Biomedical Innovation Law (CeBIL) at the University of Copenhagen to explore the legal and ethical implications of “black box” artificial intelligence tools used for precision medicine.

The Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL) will identify policy gaps in the US and Europe while examining the scientific and legal challenges of protecting patients without stymying innovation.

“Artificial intelligence and machine learning in medicine create tremendous possibilities of transforming health care for the better,” said Professor Nicholson Price, a project collaborator and past Petrie-Flom Center Academic Fellow. 

“But it’s so different from traditional medical technology that we need new tools to understand how it should be developed, regulated, and deployed in care settings. PMAIL aims to tackle exactly those issues and help develop those tools.”

The Office of the National Coordinator is also seeking to ensure that artificial intelligence developers embrace transparency and reliability as their algorithms increase in sophistication.

“The maxim ‘do no harm’ can perhaps best be upheld by the development of processes and policies to ensure the transparency and reproducibility of AI methods and results,” said a team of officials from the ONC, Robert Wood Johnson Foundation, and AHRQ in a blog post in January.

The blog post introduced an ONC-commissioned report that tries to balance the excitement and potential of artificial intelligence with the myriad challenges of making AI a force for good in healthcare.

Once again, trust and transparency are at the root of the problem – especially when AI tools are marketed directly to consumers, bypassing a clinically educated provider altogether.

“There is enormous money to be made in the inevitable onset of internet-delivered diagnostics and care,” points out JASON, the organization tapped to develop the report. “This will promote the entry of all sorts of companies into this space, both meritorious and not.”

While many applications and services are legitimate in their goals and methodologies, “we could imagine a scam service asking patients to submit self-taken skin mole images along with payment for an automated ‘quack’ diagnosis in return, one that did not actually use any validated classification scheme,” says the report.

“More likely, the methods used by any one company may be hidden or obscure, meaning the user has no way to judge the soundness of the company.”

“There is potential for the proliferation of misinformation that could cause harm or impede the adoption of AI applications for health. Websites, apps, and companies have already emerged that appear questionable based on information available,” JASON warns.

Preventing the proliferation of “snake oil” offerings and combatting the potential pitfalls of “black box” tools and algorithms will require a concerted effort across the healthcare industry.

Regulators will need to be firm and thorough with the developers and vendors bringing new products to market, and move as quickly as possible to keep pace with breakneck innovation in AI.

Healthcare providers must also play a significant role in keeping vendors accountable, especially for the quality of products that do not fall under the jurisdiction of the FDA or other regulatory bodies.

Providers should avoid getting caught up in the hype surrounding these new products – especially those that have not yet been validated in real-world situations – and evaluate them shrewdly before relying upon them for any type of patient care.

They should also work to educate patients about consumer-facing applications and tools that purport to leverage machine learning or artificial intelligence, and keep a list of reliable alternatives ready to share if a product falls short of expectations.

“This is an emerging area, and we recognize that many people are studiously working to figure out a way to make machine learning software less of a black box,” the CDS Coalition acknowledges.

It is highly unlikely – and almost certainly not necessary – that all clinicians will become experts in the development and deployment of complex machine learning algorithms. 

But making it a point to give clinical decision-makers a solid understanding of how recommendations are made, what data supports those suggestions, and how much confidence the system has in each one will help providers use artificial intelligence tools to their fullest without running afoul of liability laws, regulations, or negative patient opinion.

As machine learning works its way ever deeper into the health IT ecosystem, ensuring transparency and trust will no doubt remain one of the most important tasks within an industry full to bursting with the potential to dramatically alter the delivery of quality clinical care. 
