
How Social Determinants Data Can Enhance Machine Learning Tools

Regenstrief researchers incorporated social determinants of health data into a machine learning algorithm to improve its performance.



By Jessica Kent

In order to provide holistic, quality care for patients, clinicians have to address the non-clinical factors that influence overall wellness – individuals’ social determinants of health.

While it is widely understood that these determinants have a significant impact on physical health, providers don’t always have the means to focus on patients’ social characteristics.

“When you look at how clinical care has been delivered up until this point, you see a lot of focus on treating patients in primary care settings,” Suranga Kasthurirathne, PhD, Regenstrief Institute research scientist and assistant professor of pediatrics at IU School of Medicine, told HealthITAnalytics.

“Unfortunately, what this doesn't account for is all the things that influence or impact us outside of the doctor's office. So many non-clinical factors influence our health and how we respond to adverse situations. But primary care physicians don't always have the time or resources to identify these needs and point patients to the right place.”


To help providers identify primary care patients with social risks, researchers at Regenstrief Institute and Indiana University turned to machine learning technology. The team developed Uppstroms, a machine learning application that identifies patients who may need referrals to wraparound services.


To further improve the performance of the Uppstroms app, researchers recently incorporated additional social determinants of health data into the algorithm, including insurance, medication history, and behavioral health history.

“When patients come in for a primary care appointment, the model will pull all of the data available on their clinical background, demographics, and social factors, and use it to identify additional services they may need,” said Kasthurirathne.

“Maybe you’re at high risk for depression, or you recently became unemployed. Maybe you're suffering from abuse. With the Uppstroms app, providers can point patients to additional services that will help take care of these non-clinical needs.”
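The Uppstroms code itself isn't published here, but the pattern Kasthurirathne describes, scoring structured clinical, demographic, and social-determinant data to flag patients who may need wraparound referrals, can be sketched roughly as follows. The feature names, toy records, and scikit-learn classifier below are illustrative assumptions, not the actual Uppstroms model.

```python
# Rough sketch (not the Uppstroms implementation): a classifier over structured
# clinical, demographic, and social-determinant features that flags patients who
# may benefit from a wraparound-service referral. All feature names and records
# are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

# Toy records standing in for the data pulled at a primary care visit.
visits = pd.DataFrame({
    "age": [34, 61, 47, 29],
    "insurance_type": ["medicaid", "commercial", "uninsured", "medicaid"],
    "behavioral_health_history": [1, 0, 1, 0],
    "active_medications": [2, 7, 4, 1],
    "needs_referral": [1, 0, 1, 0],   # label: referred to wraparound services
})

features = visits.drop(columns=["needs_referral"])
labels = visits["needs_referral"]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("insurance", OneHotEncoder(handle_unknown="ignore"), ["insurance_type"])],
        remainder="passthrough",
    )),
    ("classify", RandomForestClassifier(random_state=0)),
])
model.fit(features, labels)

# At appointment time, score the current patient and surface a referral prompt.
new_patient = pd.DataFrame([{"age": 52, "insurance_type": "uninsured",
                             "behavioral_health_history": 1, "active_medications": 5}])
print(model.predict_proba(new_patient)[:, 1])  # probability of social-service need
```

In practice, the probability threshold that triggers a referral prompt would be chosen with clinicians rather than left at a default.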

With the added data elements, the new decision models have outperformed previous versions. Researchers have also found that the algorithm has improved patients’ adherence to provider referrals.

“After our solution was introduced, it led to an increase in referrals as well as an increase in patients actually turning up for appointments. That's very important because in many situations, providers make referrals and then patients don't show up. That's a huge waste of costs,” said Kasthurirathne.


“In our case, this tool has helped ensure that patients will turn up to these referrals. And both of those factors are very valuable to healthcare systems and patients alike.”

According to Kasthurirathne, the emergence of advanced analytics tools and increased access to data has allowed the healthcare industry to start to address individuals’ social determinants.

“We’ve known for a long, long time that an individual is impacted by both clinical factors and social risks. However, we couldn't really act on this knowledge until the advent of computing resources and increased accessibility to large amounts of data,” he noted.

“So, access to machine learning tools and patient information has really helped us start to address the social determinants of health. But it's only in recent times that we've been able to apply and use these materials.”

For developers and researchers looking to design healthcare artificial intelligence tools, continuous improvements and refinements are a critical part of the process.


“You’ll see lots of work where somebody builds an AI model, they evaluate it and they walk away. But AI development has to be an ongoing task, because the availability of data sources is continuously changing,” Kasthurirathne said.

“It's critical to monitor innovations in the AI field, and continuously and systematically improve your AI models. We had a good product in the past. What we've done here is take those newer advancements and build a stronger, more robust product.”

Going forward, researchers are looking to develop better approaches to working with unstructured data.

“The current work that we've done is all structured data, but a lot of clinical value is present in unstructured data – free text reports, doctor's notes, lab tests, other semi-structured documents,” said Kasthurirathne.  

“While there have been lots of advances in natural language processing and the ability to work with unstructured data, it's very challenging to operationalize that knowledge for real-time analytics. One of our next steps is to basically develop better mechanisms to work with that unstructured data and to get to a place where we can apply them in real time.”
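As a rough illustration of what working with that unstructured text might look like, the sketch below turns free-text notes into simple bag-of-words features that could feed the same kind of referral classifier. The notes, labels, and pipeline are invented for this example and are not the team's approach.

```python
# Illustrative sketch only: derive TF-IDF features from free-text notes and fit a
# simple classifier. The notes and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports recent job loss and trouble affording medication",
    "routine follow-up, no social concerns raised",
    "screens positive for depression, lives alone, limited transportation",
    "annual physical, stable housing and employment",
]
needs_referral = [1, 0, 1, 0]

text_model = make_pipeline(TfidfVectorizer(stop_words="english"),
                           LogisticRegression())
text_model.fit(notes, needs_referral)

# A fitted pipeline like this can score an incoming note quickly, which is the
# real-time constraint Kasthurirathne describes.
print(text_model.predict_proba(
    ["recently unemployed, asking about food assistance"])[:, 1])
```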

Kasthurirathne and his team are also aiming to eliminate implicit or explicit bias in their algorithms.

“We’re seeking to develop better approaches to continuously monitor and evaluate our AI solutions. This includes keeping track of the machine learning model's performance, as well as evaluating how free from bias the models are,” he stated.

“There have been significant concerns in the media regarding AI tools that present certain levels of biases across populations, and that's a huge limitation that we need to address. We need to come up with a robust method to evaluate these models incrementally and make sure that they are performing in a fair manner.”
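One common way to run that kind of incremental check is to compare a core metric across demographic subgroups on held-out predictions. The sketch below does this for sensitivity using invented groups, outcomes, and predictions; it is not the Regenstrief team's evaluation pipeline.

```python
# Illustrative fairness check: compare sensitivity (recall) across subgroups.
# Group labels, actual outcomes, and predictions are invented.
import pandas as pd
from sklearn.metrics import recall_score

evaluation = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "actual":    [1, 0, 1, 1, 0, 1, 1, 0],
    "predicted": [1, 0, 1, 0, 0, 1, 0, 0],
})

for group, rows in evaluation.groupby("group"):
    sensitivity = recall_score(rows["actual"], rows["predicted"])
    print(f"group {group}: sensitivity = {sensitivity:.2f}")

# A large gap between groups would flag the model for review before redeployment.
```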

To ensure analytics tools will effectively improve care, Kasthurirathne emphasized that developers and researchers need to consider not only algorithm development, but also how the tool will fit into clinical workflows.

“We'll see lots of AI models that are measured based on just predictive performance. Someone builds a model and then reports sensitivity, specificity, precision, accuracy, and that would be the only measure of how successful the model is. That might work in other domains, but in healthcare it's a lot more challenging, especially because we need to build models that work with an existing clinical decision-making process,” said Kasthurirathne.
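For reference, the measures he lists all come from a model's confusion matrix; the short sketch below spells them out using hypothetical counts.

```python
# Standard classification measures from a 2x2 confusion matrix.
# The counts are hypothetical, for illustration only.
tp, fp, fn, tn = 40, 10, 5, 45  # true/false positives and negatives on a test set

sensitivity = tp / (tp + fn)              # share of true referral needs the model catches
specificity = tn / (tn + fp)              # share of patients without needs correctly passed over
precision   = tp / (tp + fp)              # share of flagged patients who truly needed a referral
accuracy    = (tp + tn) / (tp + fp + fn + tn)

print(sensitivity, specificity, precision, accuracy)
```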

“It’s about understanding that you need to build a model that optimizes for a performance measure that clinicians care about, that it fits in with the clinical workflow, and that clinicians will accept and continue to use your tool.”