Researchers Highlight Pros and Cons of ChatGPT in Clinical Radiology

In a discussion paper partially written by ChatGPT, researchers explore the potential role that large language models may play in clinical radiology.


By Shania Kennedy

A new opinion piece published in the Journal of the American College of Radiology discusses the potential benefits, pitfalls, and ethical implications of leveraging large language models (LLMs), such as OpenAI’s ChatGPT, in clinical radiology.

The paper’s authors noted that ChatGPT and other artificial intelligence (AI)-powered chatbots have the potential to revolutionize radiology, and the paper explores how these technologies could serve as assistive tools for radiologists and other medical professionals.

The authors also highlighted that ChatGPT and similar tools could support a shift in clinical radiology toward a patient-centered model and augment the care radiologists provide.

To this end, the authors indicated that LLMs present multiple potential benefits, including improved patient education. ChatGPT could be used to provide patients with accessible, simplified imaging reports.

Imaging reports are typically complex, and primary care providers are often tasked with explaining them to patients. A chatbot could use the more complex imaging report to generate a simpler version, which the paper’s authors stated could bolster patient access to medical documents and advance patients’ understanding of their health conditions. Doing so may also improve patient satisfaction and outcomes.
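As a rough illustration of how such a workflow might look in practice (not the paper’s own method), the sketch below uses OpenAI’s Python client to ask a chat model to rewrite a report in plain language. The model name, prompt wording, and reading-level target are all assumptions made for illustration.

```python
# Hypothetical sketch: asking an LLM for a plain-language version of a
# radiology report. Model choice, prompt, and reading-level target are
# illustrative assumptions, not the paper's method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simplify_report(report_text: str) -> str:
    """Return a patient-friendly summary of a radiology report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You rewrite radiology reports in plain language at "
                    "roughly an eighth-grade reading level. Preserve all "
                    "findings; do not add diagnoses or medical advice."
                ),
            },
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    report = (
        "IMPRESSION: 4 mm noncalcified pulmonary nodule in the right "
        "upper lobe. Recommend follow-up CT in 12 months per Fleischner "
        "Society guidelines."
    )
    print(simplify_report(report))
```

In any real deployment, reports would need to be de-identified before being sent to a third-party API, a point that connects directly to the privacy concerns the authors raise later in the paper.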

The authors also pointed out that ChatGPT could aid patients ahead of radiologic-guided procedures by answering questions and giving details about upcoming procedures, assessing patient readiness, and offering reassurance and support beforehand.

In addition, chatbots could be used to help educate and train radiologists. The paper states that LLMs could be trained on radiology reports and images to generate simulated patient cases, which would then be interpreted by radiology fellows and residents. This, the authors explained, would allow trainees to practice interpreting a wider variety of cases within a simulated setting.
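The paper envisions models trained on radiology reports and images; as a rough text-only approximation, the sketch below simply prompts a general chat model for a simulated case with an answer key an instructor could hold back. The model name, prompt structure, and difficulty parameter are all illustrative assumptions.

```python
# Hypothetical sketch: generating a simulated teaching case from a
# text-only chat model. Prompt wording and model name are assumptions;
# the paper envisions models trained on both reports and images.
from openai import OpenAI

client = OpenAI()


def generate_teaching_case(modality: str, difficulty: str) -> str:
    """Ask the model for a simulated case a trainee can interpret."""
    prompt = (
        f"Write a simulated {modality} radiology case for a "
        f"{difficulty}-level resident. Include a clinical history and a "
        "findings section, then an 'ANSWER' section with the diagnosis "
        "so an instructor can check the trainee's interpretation."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(generate_teaching_case("chest radiograph", "beginner"))
```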

ChatGPT could also be used to generate content and assist with scientific article writing by helping researchers with literature searches, content organization, generation of graphs and tables, style adaptations, and translation into languages other than English.

However, prior to implementing any chatbot in a clinical or research setting, stakeholders must consider the limitations and ethical implications that come with using LLMs in healthcare, the authors stated.

They highlighted multiple concerns around the use of ChatGPT: transparency around how LLMs work; privacy and the protection of patient data; accuracy and reliability challenges; the ethical implications of AI-generated content in research; and ensuring that chatbots do not replace human judgment and expertise.

Portions of the paper were written by ChatGPT itself, a move the authors said was intended to showcase the technology’s capabilities.

Other recent research has further called attention to ChatGPT’s potential role in healthcare, including its pitfalls.

Last week, researchers demonstrated that the GPT-3.5 and GPT-4 versions of ChatGPT failed the 2021 and 2022 multiple-choice self-assessment tests of the American College of Gastroenterology (ACG), exams designed to help test takers gauge how well they would perform on the American Board of Internal Medicine (ABIM) Gastroenterology board examination.

GPT-3.5 and GPT-4 scored 65.1 and 62.4 percent, respectively, on these assessments, which require a score of 70 percent to pass. These findings may point to limitations in the tools’ utility for medical education in gastroenterology.