Population Health News

NIH funding development of AI tools for health disparity research

The “Trustworthy AI to Address Health Disparities in Under-resourced Communities” project is set to enhance the explainability of AI-driven risk prediction models.



By Shania Kennedy

George Washington University (GW) School of Medicine and Health Sciences (SMHS) and the University of Maryland Eastern Shore (UMES) have been awarded a two-year, $839,000 National Institutes of Health (NIH) grant to advance the development of artificial intelligence (AI) tools to improve health equity.

The project, known as “Trustworthy AI to Address Health Disparities in Under-resourced Communities” (AI-FOR-U), is focused on designing a “theory-based, participatory development approach” for building AI tools that can help frontline healthcare workers address disparities in the communities they serve.

For the duration of the project, research teams will work to develop and implement AI and machine learning tools designed to enhance the explainability and fairness of risk prediction models. These tools will then be assessed within the context of behavioral health, cardiometabolic disease and oncology. From there, researchers will measure users’ trust in the tools.

“We will combine theory-driven community engagement with the application and testing of trust-enhancing algorithms in the tool development,” explained Qing Zeng, PhD, professor of clinical research and leadership, director of GW’s Biomedical Informatics Center (BIC) and co-director of Data Science Outcomes Research at the Washington DC Veterans Affairs Medical Center, in the news release. “The clinical use case outcomes will be driven and selected by our partners and stakeholders. In the preparation of the project, a few risk prediction models have emerged as shared high priorities for our partners.”

The research team will collaborate with seven community partners serving Latino, Black, LGBTQ+, immigrant and lower-socioeconomic status communities in Maryland, Virginia and Washington, D.C.: Alexandria City (Virginia) Public Schools, Apple Discount Drugs, the Organization of Chinese Americans-DC, Saint Elizabeths Hospital, Unity Healthcare, Virginia State University and Whitman Walker Health.

These organizations will take part in community surveys, focus groups and interviews to provide feedback on the project’s AI tools.

“The continuing implementation of artificial intelligence in health care will have profound effects on both our methods of treating patients and on the development of solutions for many of our pressing issues,” said T. Sean Vasaitis, PhD, dean and professor in the UMES School of Pharmacy and Health Professions. “While we recognize the potential for great benefit inherent in these technologies, we also understand our responsibility to ensure that the use of AI does not increase health care inequity or lead to improper patient care through reliance on unrepresentative datasets. Additionally, there is a need to improve the AI user's understanding of how and why AI generates a response. We need to be able to trust the answers, and we need a way to judge how accurate the answers are likely to be. The AI-FOR-U project is designed to address these concerns by creating trustworthy AI applications that meet the needs of health care workers in underserved and underrepresented populations.”

The work is part of a larger effort spearheaded by the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) and NIH to tackle the issue of trustworthy AI development in the context of health equity.

The research aims to draw on GW’s experience in healthcare AI development and UMES’s expertise in health disparity research.

The AI-FOR-U project’s launch comes as a growing body of research shows that many AI models perform poorly on non-white populations.

A research team from the University of Pennsylvania, Philadelphia, and the National Institute on Drug Abuse (NIDA) recently found that AI models that predict depression severity using language from individuals’ social media posts may generalize well to white American populations, but not Black ones.

The study was predicated on evidence that depression and language use are correlated and that demographic features such as age and gender significantly impact language use. However, research into the potential relationship between language and depression and how it may be affected by race is limited.

To address this, the researchers evaluated the impact of race on the depression-language association using AI. The analysis revealed that these models perform significantly better on white participants than on Black individuals, highlighting the need for additional research into how depression is expressed in natural language across diverse groups.