
AI Tool Can Detect Signs of Mental Health Decline in Text Messages

University of Washington School of Medicine researchers have found that an artificial intelligence tool can accurately identify “red-flag language” in text messages from patients with serious mental illness.


By Shania Kennedy

A study published last month in Psychiatric Services shows that an artificial intelligence (AI) algorithm can detect and classify cognitive distortions, or “red-flag language,” in text message exchanges between patients with serious mental illness and their mental health providers as accurately as clinically trained human raters.

Text messaging has become a growing part of mental health evaluation and treatment as telehealth use has expanded in recent years. However, text messaging-based interventions can lack the emotional reference points and subtle mental health indicators that clinicians rely on during in-person visits with patients, the press release states.

Like their peers in other medical specialties, mental health providers face burnout, yet they are tasked with delivering high-quality care amid a behavioral healthcare provider shortage and the US youth mental health crisis. These strains, the press release notes, can cause undertrained or overworked clinicians to miss cognitive distortions that act as warning signs of mental health decline in their text exchanges with patients.

“When we're meeting with people in person, we have all these different contexts,” said Justin Tauscher, PhD, the paper’s lead author and an acting assistant professor at the University of Washington School of Medicine, in the press release. “We have visual cues, we have auditory cues, things that don't come out in a text message. Those are things we're trained to lean on. The hope here is that technology can provide an extra tool for clinicians to expand the information they lean on to make clinical decisions.”

To develop and evaluate the tool, the researchers examined 7,354 unique, unprompted text messages exchanged between mental health providers and 39 patients with serious mental illness and a history of hospitalization. The messages were collected during a randomized controlled trial of a 12-week texting intervention.

Clinically trained human evaluators graded the texts for cognitive distortions, such as overgeneralizing, catastrophizing, or jumping to conclusions, as they typically would in a patient care setting. The researchers then built AI algorithms that use natural language processing (NLP) to perform the same task, allowing them to gauge the tool's effectiveness for clinical decision support.
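The study does not publish its model architecture, but the task it describes is a supervised text-classification problem: messages labeled by human raters are used to train a model that assigns distortion categories to new messages. The following is a minimal sketch of that setup, assuming a simple TF-IDF plus logistic regression pipeline from scikit-learn as a stand-in for the study's actual NLP model; all messages, labels, and category names below are invented for illustration.

```python
# Hypothetical sketch of the classification task described above; the study's
# real model and data are not public, so this is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: each message is tagged with a cognitive-distortion
# category (or "none"), mirroring the labels human raters would assign.
messages = [
    "I failed the interview, so I'll never get any job.",     # overgeneralizing
    "If I miss this appointment, my whole life falls apart.",  # catastrophizing
    "She didn't text back, so she must hate me.",              # jumping to conclusions
    "Thanks, see you at our session on Tuesday.",              # none
]
labels = ["overgeneralizing", "catastrophizing", "jumping_to_conclusions", "none"]

# Fit a bag-of-words classifier; a real system would need far more labeled
# data (the study's raters graded 7,354 messages).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(messages, labels)

# Flag red-flag language in a new, unseen message.
print(model.predict(["Nothing I do ever works out."]))
```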

“Being able to have systems that can help support clinical decision-making I think is hugely relevant and potentially impactful for those out in the field who sometimes lack access to training, sometimes lack access to supervision or sometimes also are just tired, overworked and burned out and have a hard time staying present in all the interactions they have,” said Tauscher.

Overall, the researchers found that the model performed on par with the human raters, suggesting the tool could be used in clinical settings and in the development of scalable, automated clinical decision support tools for message-based mental healthcare.
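The article does not specify how "on par" was measured, but a standard, chance-corrected way to quantify agreement between a model and human raters on a labeling task like this is Cohen's kappa. The sketch below shows how such a comparison might be computed; the ratings are invented for illustration.

```python
# Minimal agreement check, assuming Cohen's kappa as the comparison metric
# (an assumption; the study may report a different statistic).
from sklearn.metrics import cohen_kappa_score

human_ratings = ["catastrophizing", "none", "overgeneralizing", "none", "jumping_to_conclusions"]
model_ratings = ["catastrophizing", "none", "overgeneralizing", "catastrophizing", "jumping_to_conclusions"]

# kappa = 1.0 means perfect agreement; 0 means no better than chance.
print(f"Cohen's kappa: {cohen_kappa_score(human_ratings, model_ratings):.2f}")
```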

This study is just one example of how NLP has been used to evaluate mental health.

In June, the Crisis Text Line, a nonprofit organization that provides free, 24/7 text-based mental health support, shared its third annual data report, which leveraged NLP to highlight trends in youth mental health gleaned from conversations between texters and counselors in 2021.