Tools & Strategies News

Experts Call for More Randomized Controlled Trials of Clinical AI

A shortage of randomized controlled trials evaluating AI-assisted tools integrated into clinical practice may limit researchers' ability to quantify the clinical benefit of these tools, experts have found.


By Shania Kennedy

In a new systematic review published in the Journal of Medical Internet Research, researchers concluded that randomized controlled trials (RCTs) evaluating artificial intelligence (AI)-assisted tools integrated into clinical practice are limited in number and scope, and that more such trials are needed to advance the role of AI in medicine.

There is a growing body of research to support the clinical utility of AI in healthcare, including in chronic disease management, cancer care, and clinical decision support.

However, experts have also raised concerns about its use as its popularity has grown. Some issues lie outside the technical applications of AI and relate to the data used rather than the algorithms themselves, a common challenge for those applying AI to medical imaging, for example. Other issues stem from the way algorithms are designed, such as models that unintentionally perpetuate racial bias.

The issue highlighted in this study belongs to a third category: the clinical readiness and validation of AI models. The researchers aimed to review all published RCTs of AI-assisted tools to characterize their performance in clinical practice. According to the study, this information is often used to assess the clinical relevance of these tools, which in turn can influence the larger role of AI in medicine.

The researchers began by collecting relevant RCTs, searching the CINAHL, Cochrane Central, Embase, MEDLINE, and PubMed databases to identify trials comparing the performance of AI-assisted tools with conventional clinical management without AI assistance. The search returned 11,839 articles, of which only 39 were ultimately included in the evaluation.

The research team then analyzed each study's findings to determine its clinical relevance. Overall, the researchers found that AI-assisted tools had been implemented in 13 different clinical specialties, with most of the RCTs published in gastroenterology.

Most RCTs investigated tools based on patients' biological signals, while some relied strictly on clinical data. AI tools outperformed standard care methods in 77 percent of the studies, and clinically relevant outcomes improved with implementation of the AI tool in 70 percent of these studies.

These findings indicate that although there is evidence to support integrating AI-assisted tools into clinical care, RCTs in this area remain limited, the researchers stated. They recommend that more RCTs of AI tools be undertaken to address this gap in the research.

This is one example of how limitations in healthcare IT research can impact clinical care. But health technology itself can also have limitations that, in turn, affect research.

In a recent interview with HealthITAnalytics, Rosalind Picard, founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology (MIT) Media Lab, and principal investigator at MIT’s Jameel Clinic, shared how limitations in AI and wearables can negatively impact depression research.