Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots

To effectively deploy chatbots powered by artificial intelligence, healthcare leaders will need to ensure these tools support patient safety and data privacy.

By Jessica Kent

Patient safety, data privacy, and health equity are key considerations for the use of chatbots powered by artificial intelligence in healthcare, according to a viewpoint piece published in JAMA.

With the emergence of COVID-19 and social distancing guidelines, more healthcare systems are exploring and deploying automated chatbots, the authors noted. However, there are several key considerations organizations should keep in mind before implementing these tools.

"We need to recognize that this is relatively new technology and even for the older systems that were in place, the data are limited," said the viewpoint's lead author, John D. McGreevey III, MD, an associate professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania.

"Any efforts also need to realize that much of the data we have comes from research, not widespread clinical implementation. Knowing that, evaluation of these systems must be robust when they enter the clinical space, and those operating them should be nimble enough to adapt quickly to feedback."

The authors outlined 12 focus areas that leaders should consider when planning to implement a chatbot or conversational agent (CA) in clinical care. For chatbots that use natural language processing, the messages these agents send to patients are extremely significant, as are patients’ reactions to them.

“It is important to recognize the potential, as noted in the NAM report, that CAs will raise questions of trust and may change patient-clinician relationships. A most basic question is to what extent CAs should extend the capabilities of clinicians (augmented intelligence) or replace them (artificial intelligence),” the authors said.

“Likewise, determining the scope of the authority of CAs requires examination of appropriate clinical scenarios and the latitude for patient engagement.”

The authors considered the example of a patient telling a chatbot something as serious as “I want to hurt myself.” In this case, patient safety comes to the forefront, as someone would need to monitor the chatbot’s conversations closely.

This hypothetical situation also raises the question of whether patients would take a response from a chatbot seriously, as well as who is responsible if the chatbot fails in its task.
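
The viewpoint does not prescribe an implementation, but the monitoring concern can be made concrete. Below is a minimal, purely hypothetical Python sketch of one common safety pattern: screening each incoming patient message for high-risk language and escalating flagged conversations to a human reviewer rather than letting the chatbot reply on its own. The phrase list, the EscalationQueue, and the response text are illustrative assumptions, not elements of the authors’ framework.

```python
# Hypothetical sketch: route self-harm language to a human monitor
# instead of letting the chatbot answer on its own. The phrase list,
# queue, and reply text are illustrative placeholders only.
from dataclasses import dataclass, field
from typing import List

# Naive keyword screen; real systems would need validated intent models,
# which the authors note are not yet validated for most populations.
HIGH_RISK_PHRASES = ["hurt myself", "kill myself", "end my life", "suicide"]

@dataclass
class EscalationQueue:
    """Conversations awaiting review by an on-call clinician."""
    pending: List[str] = field(default_factory=list)

    def escalate(self, conversation_id: str) -> None:
        self.pending.append(conversation_id)

def handle_message(message: str, conversation_id: str,
                   queue: EscalationQueue) -> str:
    """Screen a patient message before the chatbot is allowed to reply."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in HIGH_RISK_PHRASES):
        queue.escalate(conversation_id)
        # Hand off immediately; do not let the bot improvise here.
        return ("It sounds like you may be in distress. A member of our "
                "care team has been notified and will contact you shortly. "
                "If you are in immediate danger, call 911 or 988.")
    return ""  # empty string: safe to proceed with normal bot logic

if __name__ == "__main__":
    queue = EscalationQueue()
    print(handle_message("I want to hurt myself", "conv-001", queue))
    print("Escalated conversations:", queue.pending)
```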

“Even though technologies to determine mood, tone, and intent are becoming more sophisticated, they are not yet universally deployed in CAs nor validated for most populations,” the authors said.

“Moreover, there is no mention of CAs in the US Food and Drug Administration’s (FDA) proposed regulatory framework for AI or machine learning for software as a medical device nor is there a user’s guide for deploying these platforms in clinical settings.”

The authors also noted that regulatory organizations like the FDA should develop frameworks for appropriate classification and oversight of CAs in healthcare. For example, policymakers could classify CAs as low risk versus higher risk.

“Low-risk CAs might be less automated, structured for a specialized task, and have relatively minor consequences if they fail. A CA that guides patients to appointments might be one such example,” the authors wrote.

“In contrast, higher-risk CAs would involve more automation (natural language processing, machine learning), unstructured, open-ended dialogue with patients, and have potentially serious patient consequences in the event of system failure. Examples of higher-risk CAs might be those that advise patients after hospital discharge or offer recommendations to patients about titrating medications.”
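
To illustrate how such a two-tier scheme might be encoded in software, here is a small, hypothetical Python sketch built around the criteria the authors name: degree of automation, dialogue structure, and consequences of failure. The field names and the conservative triage rule are assumptions for illustration only, not an FDA or JAMA classification.

```python
# Hypothetical sketch of the authors' low-risk vs. higher-risk split.
# Field names and the triage rule are illustrative assumptions,
# not a regulatory framework.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    HIGHER = "higher"

@dataclass
class ConversationalAgent:
    name: str
    open_ended_dialogue: bool      # unstructured conversation with patients?
    automated_nlp_ml: bool         # NLP/ML-driven automation?
    serious_failure_impact: bool   # potential patient harm if it fails?

    def risk_tier(self) -> RiskTier:
        # Any single higher-risk attribute bumps the tier; a deliberately
        # conservative rule chosen for this sketch.
        if (self.open_ended_dialogue or self.automated_nlp_ml
                or self.serious_failure_impact):
            return RiskTier.HIGHER
        return RiskTier.LOW

# The authors' own examples, encoded under these assumptions:
scheduler = ConversationalAgent("appointment guide", False, False, False)
titration = ConversationalAgent("medication titration advisor", True, True, True)
print(scheduler.name, "->", scheduler.risk_tier().value)   # low
print(titration.name, "->", titration.risk_tier().value)   # higher
```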

Additionally, the authors noted that vendors and healthcare organizations that partner to deploy CAs should be mindful of their converging incentives and work to balance those incentives with attention to each of the 12 focus areas.

“Given the potential of CAs to benefit patients and clinicians, continued innovation should be supported. However, hacking of CA systems (as with other medical systems) represents a cybersecurity threat, perhaps allowing individuals with malicious intent to manipulate patient-CA interactions and even offer harmful recommendations, such as quadrupling an anticoagulant dose,” the authors stated.

Ultimately, the authors stated, the successful and effective deployment of chatbots in healthcare will depend on the industry’s ability to assess these tools.

“Conversational agents are just beginning in clinical practice settings, with COVID-19 spurring greater interest in this field. The use of CAs may improve health outcomes and lower costs. Researchers and developers, in partnership with patients and clinicians, should rigorously evaluate these programs,” the authors concluded.

“Further consideration and investigation involving CAs and related technologies will be necessary, not only to determine their potential benefits but also to establish transparency, appropriate oversight, and safety.”

Healthcare leaders will need to continually evaluate these tools’ capacity to improve care delivery.

"It's our belief that the work is not done when the conversational agent is deployed," McGreevey said. "These are going to be increasingly impactful technologies that deserve to be monitored not just before they are launched, but continuously throughout the life cycle of their work with patients."