
Suicide Risk Prediction Models Could Perpetuate Racial Disparities

Two suicide risk prediction models are less accurate for some minority groups, which could exacerbate ethnic and racial disparities.


By Jessica Kent

Suicide risk prediction models that perform well in the general population may not be as accurate for Black, American Indian, and Alaska Native people, potentially worsening ethnic and racial disparities.

In a study published in JAMA Psychiatry, researchers from Kaiser Permanente found that two suicide risk prediction models don’t perform as well in these racial and ethnic groups. The team believes this is the first study to examine how the latest statistical methods to assess suicide risk perform when tested specifically in different racial and ethnic groups.

Researchers noted that in the US, more than 47,500 people died from suicide in 2019 – an increase of 33 percent since 1999.

Several leading organizations, including the Veterans Health Administration and HealthPartners, have started using suicide risk prediction models to help guide care. Leaders are looking to leverage records of mental health visits, diagnoses, and other data points to identify patients at high risk of suicide.

"With enthusiasm growing for suicide prediction modeling, we must be sure that such efforts consider health equity," said Yates Coley, PhD, the study's first author and an assistant investigator at Kaiser Permanente Washington Health Research Institute.


"Current methods maximize predictive performance across the entire population, with predictive accuracy in the largest subgroup -- white patients -- eclipsing the performance for less-prevalent racial and ethnic subgroups."

Researchers gathered electronic health record data from nearly 14 million outpatient mental health visits over a seven-year period across seven healthcare systems. Using these records, the team developed two models to predict suicide death within 90 days of a mental health visit: a standard logistic regression approach and a random forest machine learning algorithm.

Both models drew on demographic characteristics, comorbidities, mental health and substance use diagnoses, dispensed psychiatric medications, prior suicide attempts, prior mental health encounters, and responses to the Patient Health Questionnaire-9 (PHQ-9), which is routinely completed at mental health visits.
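
For readers who want a concrete picture of this setup, the two modeling approaches could be sketched in Python with scikit-learn roughly as follows. The file path, feature names, and outcome column below are hypothetical placeholders, not the study's actual data pipeline.

```python
# Illustrative sketch only: hypothetical columns and data source,
# not the Kaiser Permanente study's actual code.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical visit-level dataset with the kinds of predictors described above.
visits = pd.read_csv("mental_health_visits.csv")  # placeholder path
feature_cols = [
    "age", "sex", "charlson_comorbidity_index",        # demographics and comorbidities
    "depression_dx", "substance_use_dx",                # mental health / substance use diagnoses
    "antidepressant_dispensed", "prior_suicide_attempt",
    "prior_mh_visits_past_year", "phq9_item9_score",    # PHQ-9 responses
]
X = visits[feature_cols]
y = visits["suicide_death_within_90_days"]              # binary outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Standard logistic regression approach
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Random forest machine learning algorithm
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
```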

The results showed that the models accurately identified suicides and avoided false positives across the entire sample, as well as for white, Hispanic, and Asian patients. However, the models performed far worse for Black, American Indian, and Alaska Native patients, and for patients whose race or ethnicity was not recorded.

For example, the area under the curve (AUC) for the logistic regression model was 0.828 for white patients compared with 0.640 for patients with unrecorded race/ethnicity and 0.599 for American Indian/Alaska Native patients.


For the random forest model, the AUC for white patients was 0.812, compared with 0.676 for patients with unrecorded race/ethnicity and 0.642 for American Indian and Alaska Native patients.
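
The subgroup comparison behind these numbers amounts to computing the AUC separately for each race and ethnicity group. Continuing the hypothetical sketch above, and assuming a race_ethnicity column is available for the test set, such an equity audit could look roughly like this:

```python
# Continuing the hypothetical example: audit discrimination (AUC) by subgroup.
from sklearn.metrics import roc_auc_score

test_groups = visits.loc[X_test.index, "race_ethnicity"]  # hypothetical column
scores = logit.predict_proba(X_test)[:, 1]                # predicted risk of suicide death

for group in test_groups.unique():
    mask = test_groups == group
    # AUC is only defined when the subgroup contains both outcomes.
    if y_test[mask].nunique() == 2:
        auc = roc_auc_score(y_test[mask], scores[mask])
        print(f"{group}: AUC = {auc:.3f}")
```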

Researchers cited several possible reasons for these differences in prediction accuracy, including embedded biases in the data. Black, American Indian, and Alaska Native people face barriers to accessing mental health services, so less data on suicide risk factors is captured for these groups, making accurate predictions more difficult.

Additionally, the team noted that even when these populations do have access to mental health services, they are less likely to be diagnosed and treated for mental health conditions. The clinical data may not accurately reflect risk, which can impact the models’ suicide predictions.

Finally, suicides in these populations may be incorrectly identified as unintentional or accidental deaths, further complicating efforts to predict them.

The group pointed out that the two models examined in the study are not the same as the ones now being implemented in health systems. This study evaluated models that predict suicide deaths, while models used in clinical care at Kaiser Permanente predict self-harm or suicide attempts.


Researchers believe that audits like the one used in this study may be needed at other healthcare organizations using suicide prediction models.

"Before we implement any prediction model in our clinics, we must test for disparities in accuracy and think about possible negative consequences," said Gregory Simon, MD, MPH, a study co-author and Kaiser Permanente Washington Health Research Institute senior investigator.

"Some prediction models pass that test and some don't, but we only know by doing research like this."

Racial bias and the exacerbation of disparities are serious concerns with data analytics algorithms across healthcare. Instead of helping patients, models built with biased or inaccurate information can worsen existing health inequities.

In 2019, a study identified racial bias in a common algorithm the industry uses to identify eligibility for care management programs.

“There’s growing concern around AI, machine learning, data science, and the risk of automation reinforcing existing biases through the use of algorithms. It was a confluence of what we know is a potential concern,” said Brian Powers, MD, MBA, physician and researcher at Brigham and Women’s Hospital and lead author of the study. 

“There’s absolutely a place for algorithms. What this study showed us is these types of tools are really widespread and have become essentially ubiquitous without enough attention to potential downsides.”

As the use of AI, predictive modeling, and other data analytics technologies continues to rise, the healthcare industry will need to ensure these tools improve care for every patient population.