Population Health News

UPMC Explores Machine Learning’s Potential to Personalize HIV Testing

A UPMC expert outlines how machine-learning software may help improve laboratory workflows and enhance care for those undergoing HIV testing.

By Shania Kennedy

In a recent article published by the American Association for Clinical Chemistry (AACC), Sarah Wheeler, PhD, an associate professor of pathology at the University of Pittsburgh School of Medicine, medical director of clinical chemistry at UPMC Children’s Hospital of Pittsburgh, and medical director of the automated testing laboratory at UPMC Mercy Hospital, discussed how the health system is exploring the use of machine learning (ML) to personalize and improve human immunodeficiency virus (HIV) testing.

The Centers for Disease Control and Prevention (CDC) report that an estimated 1.2 million people in the US have HIV, including approximately 158,500 people unaware of their status. Almost 40 percent of new infections are transmitted by people who aren’t aware they have HIV, making testing a critical part of preventing infection and improving outcomes.

In 2006, the CDC recommended that all Americans between the ages of 13 and 64 be tested for HIV at least once, with more frequent testing for those whose risk factors make them more prone to infection. Early diagnosis and treatment have been demonstrated to improve patient outcomes, but Wheeler explained that “the approach to using the different types of HIV tests themselves requires expert judgment to serve diverse populations and mitigate the secondary effects of potential false-positive and false-negative results.”

To address this, UPMC researchers developed an ML algorithm that routes HIV screening results into personalized workflows based on their classification as likely true or false positives, improving care for the health system’s low-prevalence and high-risk patient populations.

The standard HIV tests routinely used in clinical care are fourth-generation (HIV4G) and fifth-generation (HIV5G) tests. The HIV4G test was the first to detect both HIV-1 and HIV-2 antibodies alongside HIV p24 antigens, which reduced the time between infection and positive screening significantly.

However, Wheeler noted that HIV4G provides only a single positive/reactive or negative/nonreactive result and requires follow-up testing to differentiate HIV-1 antibodies from HIV-2 antibodies or HIV-1 p24. If the follow-up differentiation test is unclear or negative, nucleic acid amplification testing (NAAT) for HIV-1 is required.

HIV5G, in contrast, provides results for HIV-1 antibodies, HIV-2 antibodies, and HIV-1 p24 antigen, meaning that it has the potential for fewer follow-ups. According to Wheeler, both types of tests are clinically useful, but recognizing which patients or populations to screen with a particular type of test can be a challenge.

This is particularly challenging for a health system like UPMC, which serves both low-prevalence populations, in which one would expect to see more false positives, and higher-prevalence populations, which are likely to present more true positives.

Accurately capturing false and true positives is key for reporting public health data and reducing HIV spread. The researchers leveraged their ML tool to help predict whether a screen was likely a false or a true positive and used that information to determine lab workflows for the specimen.

A particular challenge at UPMC was that many patient samples were reactive when tested by HIV5G but tested negative by NAAT. These samples also often had reactivity for both HIV-1 and HIV-2, low-level reactivity, or were reactive for all analytes, which made classifying results more difficult.

The researchers evaluated 60,587 assays: 453 were reactive by HIV5G, and a further 127 were negative by HIV5G but had HIV NAAT performed, confirming them as true negatives. After excluding 45 of these assays because of incomplete information, the research team evaluated the remaining 535 cases.

Of these, 142 of 408 reactive results were false positives.
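The cohort figures above are internally consistent, which can be checked with quick arithmetic (this sketch assumes, as the 408 figure implies, that all 45 exclusions came from the reactive group):

```python
# Cohort counts as reported in the article.
reactive_hiv5g = 453       # reactive by HIV5G screening
naat_true_negative = 127   # HIV5G-negative, confirmed by HIV NAAT
excluded = 45              # dropped for incomplete information (assumed all reactive)

evaluated = reactive_hiv5g + naat_true_negative - excluded
reactive_remaining = reactive_hiv5g - excluded
false_positives = 142

print(evaluated)                                       # 535 cases evaluated
print(reactive_remaining)                              # 408 reactive results
print(round(false_positives / reactive_remaining, 3))  # 0.348 false-positive rate
```

In other words, roughly a third of the reactive screens in this cohort turned out to be false positives, which is the problem the ML tool is built to address.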

Using HIV-1 antibodies, HIV-2 antibodies, and HIV-1 p24 antigen from the 535 cases, in addition to a random sampling of 25 percent of the presumed negative cases, the ML tool was tasked with classifying each assay.

Overall, the model correctly classified 119 of the 142 prior false positives, demonstrating a false-positive prediction accuracy of 83.8 percent. After refining the model, this accuracy rose to 94 percent, but Wheeler stated that the simpler model with 83.8 percent accuracy is easier to implement in laboratory workflows and significantly improves the personalization of patient diagnosis.
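The article does not disclose the model type or its decision thresholds, so the following is only an illustrative sketch of the kind of rule a simpler classifier might apply to the three HIV5G analyte signals; the function name, thresholds, and reactivity patterns flagged here are hypothetical, drawn from the false-positive patterns the article describes (low-level reactivity, dual HIV-1/HIV-2 antibody reactivity, or reactivity for all analytes):

```python
def likely_false_positive(hiv1_ab: float, hiv2_ab: float, p24_ag: float) -> bool:
    """Flag reactivity patterns the article associates with false positives.
    Thresholds are illustrative placeholders, not UPMC's actual cutoffs."""
    low_level = max(hiv1_ab, hiv2_ab, p24_ag) < 2.0      # weak overall signal
    dual_ab = hiv1_ab >= 1.0 and hiv2_ab >= 1.0          # both antibody channels reactive
    all_reactive = min(hiv1_ab, hiv2_ab, p24_ag) >= 1.0  # every analyte reactive
    return low_level or dual_ab or all_reactive

# Reported performance of the simpler model: 119 of 142 false positives caught.
print(round(119 / 142 * 100, 1))  # 83.8
```

Wheeler’s preference for the simpler 83.8 percent model over the refined 94 percent one reflects a common deployment trade-off: a rule that laboratory staff can inspect and implement directly in the workflow can be worth more than a few points of accuracy.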

Using this updated workflow, samples classified as likely true positive are immediately released into the electronic medical record (EMR) with text directing the clinician to the appropriate confirmatory testing.

HIV5G samples classified as likely false positive are sent for a follow-up HIV4G test. If both tests are reactive, the original sample result can be released, Wheeler stated.

However, if the second test is negative, the laboratory director can contact the clinician directly to discuss the result, or the sample can be classified as indeterminate, with appended text in the EMR for appropriate follow-up testing.
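The triage logic described in the last few paragraphs can be summarized as a small decision function. This is a hedged sketch of the workflow as reported, not UPMC’s actual laboratory information system code; the function and result names are illustrative:

```python
def triage(hiv5g_reactive, ml_says_false_positive, hiv4g_reactive=None):
    """Map a screening result to the next laboratory action, per the
    workflow described in the article. Returns an illustrative action label."""
    if not hiv5g_reactive:
        return "release-negative"
    if not ml_says_false_positive:
        # Likely true positive: release to the EMR with confirmatory-test guidance.
        return "release-with-confirmatory-guidance"
    # Likely false positive: reflex to a follow-up HIV4G test.
    if hiv4g_reactive is None:
        return "send-for-HIV4G"
    if hiv4g_reactive:
        # Both screens reactive: the original result can be released.
        return "release-original-result"
    # Discordant screens: contact the clinician or report as indeterminate
    # with appended text directing appropriate follow-up testing.
    return "indeterminate-contact-clinician"
```

For example, a reactive HIV5G screen the model flags as a likely false positive would first return `send-for-HIV4G`, and only after a negative HIV4G result would it land in the indeterminate branch.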

Technology like this has significant potential to improve care, Wheeler concluded.

“We have more opportunities awaiting us to implement machine learning,” she said. “Instead of using only the manufacturer’s threshold for reactivity, determined in a pretest probability design to assess analytical sensitivity and specificity, we are able to use all of our patient data over 3 years to inform the likelihood of a current patient result being true positive or false positive. This provides the laboratory with new opportunities to personalize workflows and provide clinicians and patients with the answers they need rather than just a number from our instruments.”