
Exploring the role of AI in healthcare risk stratification

Artificial intelligence is taking the healthcare industry by storm, and its applications in risk stratification have significant potential to improve outcomes.


In the era of value-based care, preventing or mitigating adverse patient outcomes before they happen could significantly improve care delivery and reduce costs. Doing so, however, requires healthcare stakeholders to possess a vast array of data points and the ability to analyze them effectively.

In recent years, the rise of electronic health records (EHRs) and risk scores has proven invaluable to this process, enabling health systems to begin stratifying their patient populations based on risk.

Risk stratification plays a key role in care coordination and chronic disease management, and the advent of predictive analytics has, in many ways, enhanced these efforts.

As artificial intelligence (AI) continues to make a splash in the healthcare sector, questions about its potential — including in predictive analytics — are being explored by researchers and healthcare organizations globally.

In this primer, HealthITAnalytics will dive into how AI technologies are shaping up to transform risk stratification.

THE VALUE OF RISK SCORING AND STRATIFICATION

The National Association of Community Health Centers (NACHC) conceptualizes risk stratification as “the process of assigning a risk status to patients, then using this information to direct care and improve overall health outcomes.”

NACHC notes that this process aims to segment patients into groups based on complexity and care needs. To do so, healthcare organizations must consider patients on both the individual and population levels so they can efficiently identify subgroups and provide the right level of care and services to each.

Assigning a patient’s risk category — low, medium or high — at the individual level marks the first step in implementing a personalized care plan. When patients are stratified across these risk categories at the population level, care models can be tailored to the needs of each subgroup.
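As a rough illustration of that two-level view, the sketch below buckets a numeric risk score into low, medium and high tiers and then groups patients by tier. The score field, cutoff values and patient records are hypothetical placeholders for demonstration, not thresholds from any actual risk model; real programs set cutoffs based on their own populations and clinical criteria.

```python
# Minimal sketch: bucket numeric risk scores into low/medium/high tiers,
# then group patients by tier. Scores, cutoffs, and records are hypothetical.

def assign_risk_tier(score: float, medium_cutoff: float = 0.3, high_cutoff: float = 0.7) -> str:
    """Map a 0-1 risk score onto a simple three-level category."""
    if score >= high_cutoff:
        return "high"
    if score >= medium_cutoff:
        return "medium"
    return "low"

patients = [
    {"id": "pt-001", "risk_score": 0.12},
    {"id": "pt-002", "risk_score": 0.48},
    {"id": "pt-003", "risk_score": 0.85},
]

# Individual level: attach a tier to each patient for care planning.
for patient in patients:
    patient["risk_tier"] = assign_risk_tier(patient["risk_score"])

# Population level: group patients by tier so care models can be tailored per subgroup.
population_view = {"low": [], "medium": [], "high": []}
for patient in patients:
    population_view[patient["risk_tier"]].append(patient["id"])

print(population_view)
```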

By utilizing different interventions and care models for at-risk individuals and populations, healthcare stakeholders can improve outcomes, the care experience and health equity, bolstering population health management and value-based care success.

Patient risk scores are critical in flagging patients or populations that may be at risk for adverse outcomes and may benefit from targeted interventions. These scores are developed by identifying risk factors for a given condition. For example, a family history of breast cancer is considered a risk factor for breast cancer.

These risk factors are assessed to better understand how each impacts individual risk or interacts with other relevant factors to increase risk.

This information can be incorporated into predictive analytics-based risk-scoring models to stratify patients and populations. Advanced AI techniques like machine learning are often the basis of these tools, allowing vast amounts of patient data to be analyzed rapidly.
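To make that pattern concrete, the following sketch trains a simple risk-scoring model on tabular patient features and outputs predicted probabilities that could feed a stratification step. The features, labels, synthetic data and choice of logistic regression are assumptions for illustration only, not a description of any specific clinical or commercial risk model.

```python
# Illustrative sketch of a predictive risk-scoring model on tabular patient data.
# Features, labels, and model choice are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features: e.g., age, chronic condition count, prior admissions, family history flag.
X = rng.random((500, 4))
# Hypothetical label: whether an adverse outcome occurred (synthetic for this example).
y = (X @ np.array([0.8, 1.2, 1.5, 0.6]) + rng.normal(0, 0.3, 500) > 1.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Predicted probabilities serve as risk scores that a stratification step can consume.
risk_scores = model.predict_proba(X_test)[:, 1]
print(risk_scores[:5])
```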

Despite these tools' promise, several obstacles impede their widespread adoption and use.

CHALLENGES AND LIMITATIONS

The rise of AI in healthcare comes with ever-evolving pros and cons. Understanding the potential pitfalls of these technologies and how to address them is critical to ensuring that the tools positively impact care and outcomes.

Much of the excitement around using AI in healthcare comes from the idea that the technology will assist care teams, particularly in clinical decision support (CDS). Risk stratification tools are extremely valuable for CDS, as they can provide a more detailed look into patient risk and help personalize treatment.

However, incorporating AI can exacerbate the challenges of CDS tools, including those that rely on predictive analytics and risk stratification.

In the wake of the Change Healthcare cyberattack earlier this year, data breaches and ransomware threats remain top-of-mind for healthcare stakeholders. The use of AI in the sector presents a host of security and privacy concerns as regulations and industry standards lag behind the technology’s rapid advances.

AI is an attractive target for bad actors: healthcare organizations remain focused on defending against more traditional cyberattacks, and newer technologies often lack established best practices for safe, secure use early on.

In 2022, the United States Food and Drug Administration (FDA) published guidance recommending that some AI tools — such as those used to generate patient risk predictions — should be regulated as medical devices in line with the agency’s oversight of CDS tools.

Last year, the FDA released further draft guidance proposing a science-based approach for AI- and machine learning-enabled medical devices to be modified and improved quickly in response to new data.

Currently, the FDA is prioritizing collaboration with other agencies to inform future regulations that balance healthcare AI innovation with protecting public health, but it remains to be seen how future regulations may impact privacy and security.

These concerns come alongside questions about the interplay between AI-driven tools and health equity, patient safety and automation bias. Many also point out that the “black box” phenomenon in healthcare AI is a major hurdle to using these tools, as not knowing how an algorithm generates its outputs could undermine patients’ and providers’ trust.

However, the significant positive potential of AI in healthcare is driving researchers and health systems to investigate how to overcome these challenges and improve these tools across various use cases.

USE CASES FOR RISK STRATIFICATION AI

As with any emerging tool in healthcare, defining and testing promising use cases provides a crucial springboard for future advancements and innovations. Risk stratification is theoretically valuable across various specialties and healthcare contexts, and AI may boost its utility even further.

As mentioned above, risk scoring and predictive analytics are among the top analytics tools for population health management, but risk stratification is also useful for informing patient engagement strategies.

Of course, ascertaining patient risk is useful for guiding care management, but the best care management strategies are of little use if at-risk patients aren’t actively engaged in their care.

By flagging high-risk patients, providers gain insights that help them not only tailor treatment plans but also personalize their patient engagement and activation approaches. The risk stratification process allows care teams to better understand a patient’s needs, especially if stratification reveals that social determinants of health (SDOH) or other non-medical factors are increasing that person’s risk of adverse outcomes.

This information allows providers to proactively manage risk and connect patients with the additional services they need.

Risk stratification is also useful in both acute and chronic care.

A research team from the University of California (UC) San Diego School of Medicine, writing in the January 2024 issue of npj Digital Medicine, demonstrated that a deep learning model implemented in emergency departments (EDs) could accurately predict sepsis and reduce mortality.

The tool, known as COMPOSER, was deployed in the EDs of UC San Diego Medical Center, Hillcrest and Jacobs Medical Center in December 2022. When the researchers compared patient outcomes before and after the model’s deployment, they found that the tool reduced sepsis mortality rates by 17 percent.

Similar research published recently in JAMA Oncology indicated that machine learning tools could accurately forecast an unplanned hospitalization event for patients undergoing concurrent chemoradiotherapy (CRT) using patient-generated health data from wearable devices.

Mount Sinai is actively developing machine learning-based prediction models to flag the risk of cardiovascular disease events in patients with sleep apnea.

In the realm of behavioral and mental health, many risk stratification efforts are focused on preventing suicide.

Kaiser Permanente researchers published a study this month in JAMA Psychiatry detailing how a machine learning model could help predict suicide attempts among patients scheduled for an intake visit for outpatient mental healthcare.

The researchers indicated that flagging these at-risk patients is both necessary and challenging: individuals must be identified early because many stop mental health treatment after only a handful of visits, yet the records of those newly seeking care are often too sparse for providers to accurately assess risk.

The model combats this by relying on information from intake visits and recorded suicide attempts to forecast the potential for self-harm and suicide in the 90 days following a mental health encounter.
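As a generic illustration of that kind of fixed prediction window — and not the Kaiser Permanente team's actual code or data — a label for each index encounter can be built by checking whether a recorded outcome event falls within 90 days of the visit. The function name, dates and records below are hypothetical.

```python
# Generic sketch: label an index encounter by whether an outcome event
# occurs within a fixed 90-day window afterward. Dates and records are
# hypothetical; this is not the published model's implementation.
from datetime import date, timedelta

WINDOW = timedelta(days=90)

def label_encounter(index_date: date, outcome_dates: list) -> int:
    """Return 1 if any outcome event falls within 90 days after the index visit."""
    return int(any(index_date < d <= index_date + WINDOW for d in outcome_dates))

# Hypothetical example: intake visit on March 1 with a recorded event on April 15.
print(label_encounter(date(2024, 3, 1), [date(2024, 4, 15)]))  # -> 1
```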

Other use cases for AI-based risk stratification are actively being studied and deployed. The potential for these technologies will likely grow as the tools advance and stakeholders determine ways to effectively address their pitfalls over time.