
Generative AI May Facilitate Improved Chest X-Ray Interpretation

Northwestern Medicine researchers have developed a generative AI model that can outperform radiologists at interpreting chest X-rays in some cases.


By Shania Kennedy

A team of Northwestern Medicine researchers has created a generative artificial intelligence (AI) tool capable of interpreting chest radiographs with accuracy on par with, or better than, that of radiologists for some conditions.

The tool, described in a study published in JAMA Network Open, is designed to assist busy emergency department radiologists and provide guidance for clinicians working in settings with no on-call radiologist.

“We wanted to leverage our clinical expertise as well as our experience building clinically integrated AI tools to tackle this problem using our own institutional data,” said the study’s first author, Jonathan Huang, a student in the Medical Scientist Training Program (MSTP) at Northwestern, in a press release. “And we built a model that interprets X-rays and complements physician expertise in interpreting medical images by automatically generating text reports from images to help speed clinical workflows and improve efficiency.”

To build the model, the researchers leveraged a dataset of 900,000 chest X-rays and their accompanying radiologist reports. The tool was trained on these pairs to generate a report for each image, describing relevant clinical findings and their significance in the same language and style as a human radiologist.
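
The article does not detail the model’s architecture, but a common recipe for this kind of image-to-report generation is to pair a pretrained vision encoder with a text decoder and fine-tune the pair on image–report examples. The sketch below uses the Hugging Face transformers library; the backbone choices and training step are illustrative assumptions, not the Northwestern team’s actual setup.

```python
# Illustrative sketch only: pairs a generic vision encoder with a text
# decoder, the standard recipe for generating reports from images. The
# backbones below are hypothetical stand-ins, not the study's models.
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",  # image encoder (assumption)
    "gpt2",                               # text decoder (assumption)
)
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

def training_loss(image, report_text):
    """One supervised example: learn to emit the radiologist's report
    conditioned on the X-ray image (cross-entropy over report tokens)."""
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    labels = tokenizer(report_text, return_tensors="pt").input_ids
    return model(pixel_values=pixel_values, labels=labels).loss

def generate_report(image, max_new_tokens=128):
    """Inference: draft a free-text report for an unseen X-ray."""
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    ids = model.generate(pixel_values, max_new_tokens=max_new_tokens)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```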

The research team then tested the model on 500 chest X-rays taken from an emergency department at Northwestern Medicine. The resulting interpretations were compared to those from the radiologists and teleradiologists who first interpreted each image in the clinical setting.

“We wanted to evaluate the AI model’s efficacy in the emergency department setting, which often lacks onsite radiologists who can help advise emergency physicians as they’re seeing patients,” Huang explained. “This is a very real clinical use case where you could imagine an AI model complementing human decision-making.”

From there, five board-certified emergency medicine physicians were asked to rate each AI-generated report on a scale from one to five, with a rating of five indicating that they agreed with the tool’s interpretation and saw no need for any wording changes to the report.

Through this process, the researchers found that the AI was able to accurately identify X-rays with concerning clinical findings and output high-quality imaging reports. The study also showed no significant difference in accuracy between radiologist- and AI-generated reports across pathologies, including life-threatening conditions like pneumonia.

For detecting any abnormality, the AI’s sensitivity and specificity were 84 percent and 98 percent, respectively, relative to the on-site radiologists’ interpretations. For the same task, the original reports from teleradiologists had a sensitivity and specificity of 91 and 97 percent, respectively.
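
For reference, these two metrics are simple ratios over the confusion matrix against the reference reads: sensitivity is the share of truly abnormal X-rays the model flags, and specificity is the share of normal X-rays it correctly clears. A minimal example (the counts below are illustrative, since the article reports rates rather than raw counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of abnormal studies flagged.
    Specificity = TN / (TN + FP): fraction of normal studies cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to reproduce the reported 84% / 98% figures.
sens, spec = sensitivity_specificity(tp=84, fn=16, tn=98, fp=2)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=84%, specificity=98%
```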

In a handful of cases, the AI tool identified findings missed by the radiologists, including one instance of a pulmonary infiltrate on an X-ray.

The researchers indicated that to the best of their knowledge, this is the first time this type of generative AI model has been used to produce chest X-ray reports.

“If you look at AI tools in radiology, they’ve usually been very single purpose, including ones that we’ve previously developed,” noted senior author Mozziyar Etemadi, MD, PhD, assistant professor of Anesthesiology and Biomedical Engineering at the McCormick School of Engineering. “For example, an AI model that looks at a mammogram and can detect whether or not cancer is present. But in this case, our AI model is telling clinicians everything about an image and giving all the diagnoses, and it can outperform some doctors, basically.”

Moving forward, the research team aims to expand the model to read MRIs, ultrasounds, and CT scans. They hope that the tool may someday be useful in clinics experiencing workforce shortages.

“We want this to be the radiologist’s best assistant, and hope it takes the drudgery out of their work,” Etemadi said. “We already have started a small pilot study having radiologists evaluate what using this tool in real time could look like.”

Other institutions are also pursuing use cases for AI tools in medical imaging.

Last week, a team of researchers from Johns Hopkins Medicine shared that they had developed a machine learning model that can estimate the percent necrosis (PN)—the percentage of a tumor that is considered “dead” and no longer active—in patients with intramedullary osteosarcoma.

Accurately calculating PN following chemotherapy is necessary to gauge how successful the treatment was and estimate a survival prognosis for patients. However, doing so requires pathologists to interpret whole-slide images (WSIs) of bone tissue, a labor-intensive process.
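
The arithmetic behind PN itself is straightforward once tissue has been classified; the labor is in reading the slides. A sketch assuming a tile-based pipeline in which each WSI patch has already been labeled viable or necrotic (the Johns Hopkins team’s actual pipeline is not detailed here):

```python
def percent_necrosis(necrotic_patches, viable_patches):
    """PN = necrotic tumor area / total tumor area, as a percentage.
    Assumes equal-sized WSI patches already classified by a model."""
    total = necrotic_patches + viable_patches
    return 100.0 * necrotic_patches / total

# Hypothetical patch counts from one segmented whole-slide image.
print(percent_necrosis(necrotic_patches=910, viable_patches=390))  # 70.0
```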

The researchers’ model successfully interpreted and analyzed these WSIs, though with some limitations.