Speech recognition software that leverages natural language processing (NLP) to create clinical documentation has become increasingly popular among EHR users, but the potential for errors that lead to patient safety issues remains a concern.
A new study published in JAMA Network Open found an error rate of more than 7 percent in randomly selected samples of clinical documentation produced with the help of speech recognition (SR) software.
The error rate represents the number of mistakes per 100 words.
Overall, 96.3 percent of the 217 notes included at least one error directly after dictation and before review by human transcriptionists or physicians themselves.
Fifteen percent of those errors were related to clinical information, and 5.7 percent of those mistakes were clinically significant. Close to two-thirds of documents had at least one clinically significant error, with an average of 2.7 such errors per note.
The largest proportion of errors involved medications. Discharge summary documents were the most prone to errors, while surgical notes exhibited the fewest data integrity lapses after NLP production.
Even a small proportion of clinically significant errors can be problematic for patient care, said the team of researchers, who hail from a number of academic and medical institutions.
Medical errors often result from gaps in communication, many of which can originate in the electronic health record.
“Clinical documentation is essential for communication of a patient’s diagnosis and treatment and for care coordination between clinicians,” the team wrote. “Documentation errors can put patients at significant risk of harm.”
A 2014 analysis of medical malpractice cases found that data entry shortcomings contributed to about 20 percent of malpractice claims that pointed to EHRs as part of the reason for a patient safety event, the study says.
“It is therefore in the best interest of both patients and clinicians that medical documents be accurate, complete, legible, and readily accessible for the purposes of patient safety, health care delivery, billing, audit, and possible litigation proceedings,” said the authors. “As more medical institutions adopt SR software, we need to better understand how it can be used safely and efficiently.”
Review by human transcriptionists is often viewed as an important step for cutting down errors from natural language processing, the article continues.
Error rates dropped significantly, to just 0.4 percent, when transcriptionists performed checks of the clinical documentation. However, a larger share of the remaining errors – 26.9 percent – were related to clinical information at this stage, and 8.9 percent were clinically significant.
While the number of overall mistakes dropped again slightly to 0.3 percent once physicians reviewed the documentation prior to signing off on the notes, a high number of clinically significant errors remained.
Even after physician review, 25.9 percent of errors were related to clinical information, and 6.4 percent were significant.
Diagnosis errors replaced medication mistakes as the most common issue after transcriptionist and physician review.
“We also found evidence suggesting some clinicians may not review their notes thoroughly, if they do so at all,” the study pointed out.
“Transcriptionists typically mark portions of the transcription that are unintelligible in the original audio recording with blank spaces (eg, ??__??), which the physician is then expected to fill in. However, we found 16 signed notes (7.4%) that retained these marks, and in 3 instances, the missing word was discovered to be clinically significant.”
The study indicates that healthcare organizations may need to revisit their data integrity and information governance processes, especially if providers rely heavily on natural language processing tools for dictation and EHR documentation.
While NLP tools can be expected to produce some errors due to the evolving state of the technology, organizations should ensure that human review workflows are comprehensive and thorough in order to avoid the potential for mistakes propagating through a patient’s record.
Speech recognition users may also need coaching on how best to phrase key elements of clinical notes, which types of sounds or words are most likely to be misrecognized, and how to adequately train SR software to recognize a unique accent or speech pattern.
“These findings indicate a need not only for clinical quality assurance and auditing programs, but also for clinician training and education to raise awareness of these errors and strategies to reduce them,” the authors stressed.
“With the rapid adoption of speech recognition in clinical settings, there is a need for automated methods based on natural language processing for identifying and correcting errors in SR-generated text. Such methods are vital to ensuring the effective use of clinicians’ time and to improving and maintaining documentation quality, all of which can, in turn, increase patient safety.”