
Predictive Analytics, EHR Data Identify Appointment No-Shows

A predictive analytics model using EHR data at the clinic level helped a health system identify nearly 5000 patient appointment no-shows.


By Jessica Kent

Using EHR data, organizations may be able to create predictive analytics models that accurately identify the risk of a patient appointment no-show, according to a new study published in JAMIA.

Researchers from Duke University were able to capture 4819 previously unidentified patient no-shows within the Duke health system, which may allow care sites to operate more efficiently and maximize the use of clinician hours.

Predictive analytics is a key use of electronic health record (EHR) data, the research team noted.  

Patient no-shows are associated with substantial reimbursement losses and can create significant administrative and workflow burdens. Using information from the EHR, organizations can create predictive models to determine the likelihood that a patient will experience a certain outcome, including patient no-shows and late cancellations.

The researchers set out to determine whether patient no-show risk models perform best when using data collected across wider patient populations or from patient sub-populations. They compared the accuracy of models built at the overall health system, specialty, or individual clinic levels.
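The pooled-versus-local comparison the researchers describe can be illustrated with a small sketch. This is not the study's code: the clinics, features, and coefficient patterns below are synthetic, chosen so that the clinics genuinely differ from one another (the kind of heterogeneity the study reports), and a single pooled model is compared against one model per clinic.

```python
# Hypothetical sketch: one model trained on pooled data from every clinic
# vs. a separate model per clinic. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each synthetic clinic drives no-shows with a different coefficient
# pattern, mimicking heterogeneity across real clinics.
weights = {"clinic_a": np.array([2.0, 0.0]),
           "clinic_b": np.array([-2.0, 0.0]),
           "clinic_c": np.array([0.0, 2.0])}

data = {}
for name, w in weights.items():
    X = rng.normal(size=(400, 2))
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ w - 1.0))))
    data[name] = (X, y)

# System-level model: pool all clinics into one training set.
X_all = np.vstack([X for X, _ in data.values()])
y_all = np.concatenate([y for _, y in data.values()])
pooled = LogisticRegression().fit(X_all, y_all)

# Clinic-specific models: one fit per clinic, each evaluated on its clinic.
aucs_pooled, aucs_local = [], []
for name, (X, y) in data.items():
    local = LogisticRegression().fit(X, y)
    aucs_pooled.append(roc_auc_score(y, pooled.predict_proba(X)[:, 1]))
    aucs_local.append(roc_auc_score(y, local.predict_proba(X)[:, 1]))
    print(f"{name}: pooled AUC={aucs_pooled[-1]:.2f}, "
          f"local AUC={aucs_local[-1]:.2f}")
```

Because the clinics' effects partly cancel when pooled, the pooled model discriminates poorly at the clinics whose patterns diverge, while each local model captures its own clinic's pattern.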

The team used EHR data from a health system comprising 55 clinics with 14 specialties to build the models. The group measured the accuracy of predictions based on the models’ discrimination, or ability to separate patients into risk groups, and calibration, or ability to align predicted outcomes with actual outcomes.
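The two accuracy measures the team used can be computed with standard tools. The sketch below is illustrative, not from the study: it fits a toy no-show model on synthetic data, then summarizes discrimination with the area under the ROC curve (AUC) and calibration with the Brier score (one common single-number calibration summary).

```python
# Hypothetical sketch: measuring discrimination (AUC) and a calibration
# summary (Brier score) for a no-show risk model. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

# Synthetic features, e.g. appointment lead time and prior no-show count.
X = rng.normal(size=(1000, 2))
# Synthetic outcome: 1 = no-show, with risk driven by the features.
p_true = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1] - 1.0)))
y = rng.binomial(1, p_true)

model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]

# Discrimination: how well scores separate no-shows from arrivals.
auc = roc_auc_score(y, p_hat)
# Calibration: mean squared gap between predicted probabilities
# and observed 0/1 outcomes (lower is better).
brier = brier_score_loss(y, p_hat)
print(f"AUC={auc:.2f}, Brier={brier:.3f}")
```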

Building risk models from smaller, more specific patient populations could increase the accuracy of these predictions, the group stated. The clinic-specific models outperformed the system- and specialty-level models on both discrimination and calibration.

In terms of discrimination, the clinic-specific model outperformed both the system and specialty models 67 percent of the time.

For calibration, the clinic-specific model performed better than the system and specialty models 57 percent of the time.

The team also evaluated the impact of model choice on decision-making. The model built on overall health system data had a sensitivity of 62 percent, which corresponds to capturing an additional 4160 patient no-shows per year.

In comparison, the clinic-specific models demonstrated a sensitivity of 69 percent, which corresponds to catching an additional 4819 patient no-shows per year.
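Sensitivity here is simply the fraction of actual no-shows the model flags. The toy numbers below are illustrative, not the study's; the point is that raising sensitivity at a fixed pool of actual no-shows proportionally raises the count of no-shows caught.

```python
# Hypothetical sketch: sensitivity = flagged no-shows / actual no-shows.
# Labels and predictions below are toy values, not from the study.
def sensitivity(y_true, y_pred):
    """True-positive rate: fraction of actual no-shows (1s) flagged."""
    true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_pos = sum(y_true)
    return true_pos / actual_pos

# Toy example: 10 actual no-shows; the model flags 7 of them.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 7 + [0] * 3 + [0] * 10
print(sensitivity(y_true, y_pred))  # 0.7
```

At a site with N actual no-shows per year, moving from 62 percent to 69 percent sensitivity flags roughly an extra 0.07 × N of them.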

The results of the study demonstrate that designing risk models with clinic-specific data can result in the detection of more patient no-shows, which can help reduce administrative burden and improve clinician workflow.

The team also noted that developing risk models on the clinic level can help organizations tailor outcomes to their specific environment. For example, if different specialties within a health system have different definitions for what constitutes a late cancellation, they can modify their risk models to reflect that.

“For some specialties a late cancellation was the day before the appointment,” the researchers said.  

“However, other specialties would ideally define late cancellation as 3 to 5 days before the appointment. By developing clinic specific models, each clinic could define the outcome that makes the most sense for that particular clinic.”
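Per-clinic outcome definitions like the one the researchers describe can be expressed as a small labeling step before training. The specialty names and day thresholds below are hypothetical placeholders, not values from the study.

```python
# Hypothetical sketch: each specialty defines its own late-cancellation
# window when labeling outcomes. Thresholds below are illustrative.
LATE_CANCEL_DAYS = {"dermatology": 1, "surgery": 5}  # days before appointment

def label_outcome(status, days_before_appt, specialty):
    """Return the training label for one appointment (1 = missed)."""
    if status == "no_show":
        return 1
    window = LATE_CANCEL_DAYS.get(specialty, 1)  # default: day before
    if status == "cancelled" and days_before_appt <= window:
        return 1  # a late cancellation counts like a no-show
    return 0

print(label_outcome("cancelled", 4, "surgery"))      # 1: within 5-day window
print(label_outcome("cancelled", 4, "dermatology"))  # 0: outside 1-day window
```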

In addition to demonstrating the benefits of local-level prediction models, the study also shows the potential for EHR systems to leverage large amounts of data and develop more accurate risk models.

“Many EHR vendors are offering prediction models out of the box without requiring or recommending clients to do any retraining with local data,” the researchers stated.   

“The heterogeneity seen within our dataset suggests that local validation and recalibration would lead to better performance and should be part of the implementation of these off-the-shelf models.”
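One common way to perform the local recalibration the researchers recommend is Platt-style scaling: keep the vendor model's risk scores but refit a simple logistic mapping from score to outcome on local data. The sketch below uses synthetic data in which the vendor scores rank patients well but overstate local risk.

```python
# Hypothetical sketch of local recalibration (Platt-style scaling) of an
# off-the-shelf model's scores against local outcomes. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(2)
n = 2000

# "Vendor" risk scores: informative but miscalibrated locally, because
# actual local no-show rates are half what the scores imply.
score = rng.uniform(0.05, 0.95, size=n)   # vendor-predicted risk
y = rng.binomial(1, score * 0.5)          # actual local outcomes

# Refit a one-feature logistic model mapping vendor score -> local outcome.
recal = LogisticRegression().fit(score.reshape(-1, 1), y)
score_recal = recal.predict_proba(score.reshape(-1, 1))[:, 1]

brier_before = brier_score_loss(y, score)
brier_after = brier_score_loss(y, score_recal)
print(f"Brier before: {brier_before:.3f}")
print(f"Brier after:  {brier_after:.3f}")
```

Ranking (discrimination) is unchanged by this monotone mapping; only the probabilities move toward the local outcome rates, which is exactly the gap local recalibration is meant to close.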

The researchers point out that the study is limited to data from a single health system, making it difficult to determine whether the models would transfer to other institutions.

However, the group believes the study shows the value of developing predictive models at lower levels to better capture patient no-shows and improve clinician workflows.

