
User Comments, NLP Improve Clinical Decision Support Alerts

Applying natural language processing to user comments could detect clinical decision support alerts that are broken or need improvement.


By Jessica Kent

Leveraging natural language processing methods to analyze user feedback and override comments in clinical decision support systems could help organizations detect malfunctioning or broken alerts, according to a study published in JAMIA.

Rule-based alerts in clinical decision support (CDS) systems analyze electronic health record (EHR) data and notify providers of potentially dangerous situations or suggest useful clinical actions.

While CDS alerts are used widely in healthcare organizations, the researchers noted that these alerts are vulnerable to malfunctions, which can lead to patient safety risks and alert fatigue.

Users are generally offered the ability to provide feedback on why they decided to override or ignore a CDS alert, researchers explained, but the data is not always incorporated into future CDS optimization processes.

Previous studies have used these comments to evaluate the clinical appropriateness of user overrides, the team said. However, research has yet to focus on user-override comments and their use in detecting alert malfunctions.


The group set out to determine whether free-text override commentary can be used to identify CDS alerts that are malfunctioning or broken. Researchers collected all alerts in their database that had more than ten override comments and manually classified the related alerts into three categories: “broken,” “not broken,” and “not broken but could be improved.”

Of the 120 categorized alerts, 75 were classified as broken or in need of improvement, equating to 26.6 percent of all alerts in the EHR.

The team then used three natural language processing (NLP) methods to rank the classified alerts and examined whether any of the methods brought broken alerts to the top of the list.

The methods included frequency of comments; a “cranky comments” heuristic, in which an algorithm detected words or phrases in comments that seemed to indicate frustration; and a Naïve Bayes classifier, in which an algorithm classified annotated override comments as worthy of investigation or not.
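The cranky comments heuristic described above can be sketched in a few lines of Python. Note that the study's actual word list is not reproduced in the article; the frustration terms below are illustrative assumptions, as is the scoring choice of ranking alerts by the fraction of comments containing such a term.

```python
import re

# Illustrative frustration keywords -- assumptions for demonstration,
# not the study's actual curated list.
CRANKY_TERMS = ["stupid", "annoying", "wrong", "broken", "useless",
                "not relevant", "doesn't apply", "makes no sense"]

# Compile one pattern matching any cranky term.
CRANKY_PATTERN = re.compile("|".join(re.escape(t) for t in CRANKY_TERMS))

def cranky_score(comments):
    """Fraction of an alert's override comments containing a frustration term."""
    if not comments:
        return 0.0
    hits = sum(1 for c in comments if CRANKY_PATTERN.search(c.lower()))
    return hits / len(comments)

def rank_alerts(alerts):
    """alerts: dict mapping alert id -> list of override comment strings.
    Returns alert ids sorted by cranky score, highest (most suspect) first."""
    return sorted(alerts, key=lambda a: cranky_score(alerts[a]), reverse=True)
```

A CDS team would then review the alerts at the top of the ranked list first, which is what makes this approach cheap: only the list of keywords needs to be maintained.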

The results showed that the frequency of override comments relative to total alert firings was not a good indicator of malfunctions. Of the top 20 alerts selected by this method, only eight were true alert malfunctions.


However, the team found that the other two NLP methods performed well. Of the top 20 alerts selected by the cranky comments heuristic, 16 were true malfunctions. Investigating the top 20 alerts selected by the Naïve Bayes classifier would have yielded 17 true malfunctions.
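The Naïve Bayes approach can also be sketched with a minimal multinomial classifier with Laplace smoothing, trained on comments annotated as worth investigating or not. The training comments and labels below are hypothetical examples, and the implementation is a simplified stand-in for whatever model the study actually used.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase bag-of-words tokenization."""
    return text.lower().split()

def train_nb(comments, labels):
    """Train a multinomial Naive Bayes model.
    labels: 1 = worth investigating, 0 = not."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter(labels)
    for text, y in zip(comments, labels):
        word_counts[y].update(tokenize(text))
    vocab = set(word_counts[0]) | set(word_counts[1])
    return word_counts, class_counts, vocab

def log_prob(model, text, y):
    """Log joint probability of the class and the comment's words,
    with add-one (Laplace) smoothing for unseen words."""
    word_counts, class_counts, vocab = model
    total = sum(word_counts[y].values())
    lp = math.log(class_counts[y] / sum(class_counts.values()))
    for w in tokenize(text):
        lp += math.log((word_counts[y][w] + 1) / (total + len(vocab)))
    return lp

def classify(model, text):
    """Return 1 if the comment looks worth investigating, else 0."""
    return 1 if log_prob(model, text, 1) > log_prob(model, text, 0) else 0
```

Unlike the keyword heuristic, this method needs labeled training data, which matches the article's point that more sophisticated analysis suits organizations with more resources to devote to CDS monitoring.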

The findings indicate that reviewing user override comments is a viable way to identify faulty CDS alerts.

“Our results suggest that user feedback provided through override comments has been an underutilized but valuable data source for improving CDS,” the team stated.  

“For organizations that can afford to do so, reading all the override comments may be worthwhile.”

The researchers pointed out that because the comments at their organization were very short, at an average of 25 characters, a daily manual review of all override comments is feasible.


However, for organizations with longer comments, or those with more resources, the team suggested that the two NLP methods could provide a more practical solution for identifying alert malfunctions.

“The specific methods used to analyze override comments can be adapted to suit organizations with different resources and monitoring needs,” the group said.

“A simple heuristic list of words that expressed frustration was effective at identifying broken alerts. This is a low-cost method of finding malfunctions that could be useful even for organizations with relatively few resources to devote to the maintenance of CDS. For organizations with more resources to devote to monitoring CDS, analysis of override reasons can be made more comprehensive and/or sophisticated.”

The study had some limitations, notably that it did not examine changes in classification over time. When EHR systems are upgraded, or medication and lab codes are revised, alerts that were previously broken may begin to function correctly, and vice versa.

Additionally, researchers did not investigate alerts that received no comments, or whose comments did not suggest a malfunction, meaning some broken alerts could have been missed.

Despite these limitations, the team is confident that organizations can use and tailor these methods to support their CDS needs.  

“This study develops a novel method for detecting CDS malfunctions and contributes to the growing body of evidence that malfunctions in CDS are common and often go undetected,” the group concluded.

“When it is possible and seems fruitful to do so, we recommend that CDS knowledge management personnel review all override comments on a daily basis, and we have recently deployed this approach at our own organization.”

