
Explaining the Basics of Patient Risk Scores in Healthcare

Patient risk scores and stratification can bolster care management initiatives, but stakeholders must understand the use cases and limitations.


As medicine advances and healthcare organizations move toward value-based care, providers and health systems are prioritizing population health and preventive care.

But to prevent disease and adverse outcomes for patients, health systems must first identify which populations to focus on.

Doing so involves accurately flagging and assessing at-risk patients to formulate effective prevention strategies and possible treatments. This process requires the implementation of risk scores, which support risk stratification efforts to help reduce adverse patient outcomes.

In this primer, HealthITAnalytics will outline the basics of healthcare risk scores, including what they are, how they’re used, and where they may fall short.

WHAT ARE RISK SCORES?

Risk scores are used to support risk stratification, a process that enables health systems to systematically categorize each patient based on health status and a combination of clinical, behavioral, and social factors.

These risk scores help identify patients or populations that may benefit from targeted screening or follow-up with a provider. Using risk scores, healthcare organizations can stratify populations into low-, medium-, or high-risk categories to better monitor patients’ health and address any medical needs that may arise. 
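To make the idea concrete, here is a minimal sketch of score-based stratification in Python. The 0–100 scale, the cutoffs, and the patient IDs are assumptions invented for this example, not drawn from any clinical model.

```python
# Hypothetical illustration of risk stratification: mapping a numeric
# score onto low/medium/high tiers. The 0-100 scale and cutoffs are
# assumptions for this sketch, not taken from any clinical model.

def stratify(risk_score: float) -> str:
    """Assign a risk tier based on a 0-100 score."""
    if risk_score >= 70:
        return "high"
    if risk_score >= 40:
        return "medium"
    return "low"

patients = {"pt-001": 82.5, "pt-002": 45.0, "pt-003": 12.3}
for patient_id, score in patients.items():
    print(patient_id, stratify(score))  # pt-001 high, pt-002 medium, pt-003 low
```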

Risk scores are often developed by first identifying relevant risk factors for a disease or adverse event; for diabetes, for example, these might include a family history of the disease or a history of high blood glucose.

From there, researchers can investigate how each factor elevates a person’s risk or how the interaction of multiple factors can increase risk.

That information can then be used to develop risk-scoring models, which use patient data to calculate and stratify risk at the individual or population level.
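A simple points-based model illustrates the approach: each risk factor identified by researchers carries a weight, and a patient’s score is the sum of the weights for the factors present. The factors and weights below are hypothetical.

```python
# A minimal points-based scoring sketch: each risk factor carries a
# weight, and a patient's score is the sum of the weights for the
# factors they have. Factors and weights here are invented examples.

FACTOR_WEIGHTS = {
    "family_history_diabetes": 2,
    "history_high_blood_glucose": 3,
    "bmi_over_30": 2,
    "age_over_45": 1,
}

def risk_score(patient_factors: set) -> int:
    """Sum the weights of the risk factors present for a patient."""
    return sum(weight for factor, weight in FACTOR_WEIGHTS.items()
               if factor in patient_factors)

print(risk_score({"family_history_diabetes", "history_high_blood_glucose"}))  # 5
```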

USE CASES FOR RISK SCORES IN HEALTHCARE

Many health systems have developed their own risk scores for morbidity and mortality, and some are also working to develop polygenic risk scores, which aggregate the estimated effects of many genetic variants to quantify a person’s inherited risk for various diseases.
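At its core, a polygenic risk score is a weighted sum: each variant’s effect size, estimated from genome-wide association studies, is multiplied by the number of risk alleles a person carries. The sketch below is a simplification with placeholder variants and effect sizes; real scores can span thousands to millions of variants.

```python
# Simplified sketch of a polygenic risk score: a weighted sum of
# risk-allele counts (0, 1, or 2 copies) across genetic variants.
# Variant IDs and effect sizes are placeholders, not real GWAS output.

effect_sizes = {"variant_a": 0.12, "variant_b": -0.05, "variant_c": 0.30}
allele_counts = {"variant_a": 2, "variant_b": 1, "variant_c": 0}

prs = sum(effect_sizes[v] * allele_counts[v] for v in effect_sizes)
print(f"Polygenic risk score: {prs:.2f}")  # 0.19
```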

By combining polygenic risk scores with other risk-scoring methods, clinicians can gain clearer insights into patient disease risk rather than relying on traditional or polygenic risk scores alone, according to the Centers for Disease Control and Prevention (CDC).

These scores can also be used to help gauge how a disease will progress or how well a patient is likely to respond to a certain treatment.

Traditional and polygenic risk scores are typically leveraged to support population health, care management, and risk adjustment in healthcare.

Risk stratification is foundational for population health management, as the process helps healthcare organizations anticipate and address patient needs before they arise. It also bolsters risk-stratified care management, which uses patient risk levels to proactively manage populations and resources more effectively.

Population health management and risk stratification are often used in tandem to support value-based care, but risk scoring is crucial for another popular strategy to improve patient outcomes: predictive analytics.

Predictive analytics uses advanced statistical modeling to forecast future health outcomes. The approach is valuable across various healthcare use cases, from tracking disease prevalence to predicting patient mortality.

Risk scoring can improve predictive analytics by providing healthcare organizations with a granular assessment of patient populations.
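As a sketch of how the two fit together, a precomputed risk score could serve as one input feature to a predictive model. The example assumes scikit-learn is available and uses fabricated data; it is an illustration, not a validated clinical model.

```python
# Illustrative only: a logistic regression that uses a patient's risk
# score (plus age) to predict a binary outcome such as readmission.
# The data below is fabricated; scikit-learn is assumed to be installed.

from sklearn.linear_model import LogisticRegression

# Features: [risk_score, age]; labels: 1 = adverse outcome occurred
X = [[82, 71], [45, 54], [12, 33], [67, 60], [25, 41], [90, 78]]
y = [1, 0, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[75, 65]])[0][1])  # estimated outcome probability
```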

Risk scores are also used in risk adjustment, which payers and providers use to project expected healthcare utilization and costs.
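A toy example shows the basic mechanics: projected spending scales a base rate by each member’s risk score, where 1.0 represents average risk. The base rate and scores are invented, and production models such as the CMS-HCC model are far more involved.

```python
# Toy sketch of risk adjustment: projected spending is the product of
# a base rate and a member's risk score relative to the average (1.0).
# The base rate and scores are invented for illustration.

BASE_ANNUAL_COST = 10_000  # assumed average annual cost per member

member_risk_scores = {"m-01": 0.85, "m-02": 1.40, "m-03": 2.10}
for member, score in member_risk_scores.items():
    print(f"{member}: projected cost ${BASE_ANNUAL_COST * score:,.0f}")
```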

To date, risk scores have been developed to identify dementia risk, predict the likelihood of opioid misuse in cancer survivors, quantify genetic risk for heart attack, and flag patients with COVID-19 who may develop critical illness.

While some risk-scoring tools are being further refined and validated by researchers, others are currently being used in the clinical setting.

A pilot program recently launched at Indiana University Health is exploring the use of digital tools to identify patient risk of cognitive impairment and decline.

As part of the project, patients in the primary care setting receive a lifestyle-based questionnaire and digital cognitive assessment. The evaluation leverages artificial intelligence (AI) to detect signs of cognitive impairment and produce a risk score conceptualized like a traffic light, with patients placed into red, yellow, or green categories based on their performance on the assessment.

The tool is designed to capture subtle risk factors that traditional screening tests may miss. The pilot’s leadership also indicates that the approach could help detect cognitive decline sooner and improve resource allocation.

LIMITATIONS

Despite the promise of risk scoring and stratification in these use cases, these tools have multiple limitations that must be considered before healthcare organizations can deploy them in the clinical setting.

Johns Hopkins Medicine notes that while risk scores are crucial to understanding a patient’s future health, evaluating a patient or population requires assessing information beyond these scores.

Data such as social determinants of health (SDOH) and other factors that may not be adequately captured in clinical settings also play a significant role in the health of both individuals and communities. Risk scores often do not include these considerations, meaning they can miss important insights.

Further, human error can make risk scores less effective. Medical coding errors, for instance, can introduce inaccuracies into patient records and delay care. In the context of risk scores, incorrect coding feeds flawed data into a risk-scoring model, which may then paint an inaccurate picture of patient risk and ultimately worsen outcomes.
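One hypothetical safeguard is to screen obviously malformed codes before they reach the model. The sketch below checks the general shape of ICD-10 codes with a simplified regular expression; it is an illustration, not a complete validator.

```python
# Minimal sketch of one safeguard against coding errors: rejecting
# diagnosis codes that fail a basic format check before they reach a
# risk-scoring model. The regex approximates the general shape of
# ICD-10 codes and is a simplification, not a full validator.

import re

ICD10_PATTERN = re.compile(r"^[A-TV-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")

def clean_codes(codes: list) -> list:
    """Keep only codes that look like valid ICD-10 codes."""
    return [c for c in codes if ICD10_PATTERN.match(c)]

print(clean_codes(["E11.9", "XX99", "I10"]))  # ['E11.9', 'I10']
```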

There is also some evidence to suggest that risk scores can perpetuate health disparities. One 2019 study published in Science demonstrated that a popular risk prediction tool prioritized White patients.

The researchers found that the tool gave many Black patients unusually low risk scores compared to their White counterparts, even when their health was significantly deteriorating. At any given risk score produced by the algorithm, Black patients were sicker than White patients, as evidenced by signs of uncontrolled illness.

The tool’s use of bills and insurance payouts as proxies for disease burden caused this bias. The researchers pointed out that unequal healthcare access for Black patients means their healthcare spending is often lower, making these proxies inaccurate and biased.

To address this, the research team proposed training the tool to predict the number of chronic illnesses that a patient will likely experience in a given year, which reduced disparities in the algorithm by 84 percent.
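In modeling terms, the proposed fix keeps the features the same and swaps the prediction target. The sketch below illustrates the idea with fabricated data and scikit-learn; the feature choices and model are placeholders, not the study’s actual pipeline.

```python
# Sketch of the mitigation described above: keep the model and features
# unchanged, but change what it predicts, from healthcare costs to the
# number of active chronic conditions. Data and model are placeholders.

from sklearn.linear_model import LinearRegression

X = [[0.2, 55], [0.8, 70], [0.5, 62], [0.1, 40]]  # e.g., utilization, age

costs = [2400, 3100, 5200, 900]  # spending: a biased proxy for need
chronic_counts = [1, 4, 3, 0]    # a more direct measure of illness

biased_model = LinearRegression().fit(X, costs)
fairer_model = LinearRegression().fit(X, chronic_counts)  # same features, new label

print(fairer_model.predict([[0.6, 65]]))  # predicted chronic-condition count
```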

Polygenic risk scores have also been shown to exacerbate health disparities in the field of precision medicine, according to researchers writing in a 2019 Nature Genetics article.

They noted that these scores are often much more accurate in patients of European ancestry than in those of other ancestries, a consequence of the Eurocentric bias of the genome-wide association studies used to build them. This accuracy gap, they argued, significantly limits the potential utility of polygenic risk scores in the clinical setting.

To bridge this gap, the researchers stated that genetic studies must prioritize greater diversity, with more representative samples of non-European populations. They also suggested that summary statistics from the validation of these scores be made available to ensure that disparities do not grow any further for already underserved populations.

Despite the issues with the clinical use of risk scores, work is being done to develop best practices.

In a 2021 opinion piece published in Genome Medicine, experts outlined the ethical, legal, and social issues surrounding polygenic risk scores, such as bias or the relevance of patient test results to their family members.

The authors highlighted that many of these challenges are similar to those that came about with the rise of monogenic testing. While these parallels could help inform best practices for polygenic risk scores, the authors also underscored that new work in the area is needed to ensure these issues are considered and addressed effectively.