
Biases in Artificial Intelligence Lead to Healthcare Disparities

Researchers from the US and China note that several biases found in artificial intelligence design perpetuate healthcare disparities.


By Mark Melchionna

Although artificial intelligence (AI) continues to become a critical aspect of healthcare, researchers writing in PLOS Digital Health found that biases and data gaps involved in AI production can lead to healthcare disparities.

Over time, AI has increasingly uncovered ways to decrease the frequency of clinical errors and improve outcome predictions.

Although these methods have proven effective in various situations, many have been developed with biased data or design assumptions, leading to skewed results.

In this study, the US- and China-based researchers gathered a sample of 7,314 articles.

When segmented by clinical specialty, radiology was the most represented in AI studies at 40.4 percent, followed by pathology at 9.1 percent, neurology and ophthalmology at 7.4 percent, cardiology at 5.4 percent, and internal/hospital general medicine at 5.2 percent.

There were also differences in the authors’ nationalities. Of the 123,815 authors involved in writing all eligible articles, 24.0 percent came from China and 18.4 percent came from the US. Other countries of origin included Germany, Japan, and the UK, accounting for 6.5 percent, 4.3 percent, and 4.1 percent, respectively.

These differences in clinical study areas and author nationalities indicate a strong potential for bias within AI. Factors most likely to skew the underlying data include the overrepresentation of radiology, variation in author demographics, and inconsistency in where articles originate.

Gender and level of author expertise also played a role in producing bias. The researchers noted that just over half of the authors were data experts, while under half were domain experts. In addition, 74.1 percent of the authors were male.

Regarding limitations, the researchers acknowledged that relying on data from 2019 alone, disregarding disparities within countries, and a lack of clarity in defining distinct gender and nationality affiliations may have affected the findings.

Although several biases within AI can lead to healthcare disparities, various research efforts have aimed to limit skewed results.

A January 2020 study showed that machine learning can read radiology scans and rapidly evaluate high-risk patients, and that developers can take steps to make these tools more effective by eliminating bias.

An algorithm created by Philip Thomas, PhD, MS, assistant professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst, was dedicated to balancing gender fairness and accuracy. His team constructed the algorithm by defining unsafe behaviors and training the tool to avoid them.
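The published details of that work go beyond this article, but the general pattern it describes, training candidate models and rejecting any whose "unsafe behavior" occurs too often, can be sketched in code. The Python example below is a minimal, hypothetical illustration, not the team's actual algorithm: it defines the unsafe behavior as a large accuracy gap between gender groups and discards any candidate model whose gap on held-out safety data exceeds a threshold. The model family, threshold, and data split are all illustrative assumptions.

```python
# Minimal sketch of a fairness-constrained training loop (illustrative only).
# Assumption: "unsafe behavior" = a large accuracy gap between gender groups.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def accuracy_gap(model, X, y, gender):
    """Absolute difference in accuracy between the two gender groups."""
    acc = lambda g: model.score(X[gender == g], y[gender == g])
    return abs(acc(0) - acc(1))

def train_with_safety_test(X, y, gender, max_gap=0.05, n_candidates=10, seed=0):
    """Train several candidates; keep the best one that passes the safety test."""
    rng = np.random.default_rng(seed)
    # Hold out data used only for the safety test, never for fitting.
    X_tr, X_safe, y_tr, y_safe, g_tr, g_safe = train_test_split(
        X, y, gender, test_size=0.4, random_state=seed)
    best = None
    for _ in range(n_candidates):
        c = 10.0 ** rng.uniform(-3, 2)  # random regularization strength
        model = LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
        # Safety test: discard any candidate whose fairness gap is too large.
        if accuracy_gap(model, X_safe, y_safe, g_safe) <= max_gap:
            score = model.score(X_safe, y_safe)
            if best is None or score > best[0]:
                best = (score, model)
    return best[1] if best else None  # None signals "no safe solution found"

if __name__ == "__main__":
    # Synthetic demo data: 1,000 patients, 5 features, binary outcome.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    gender = rng.integers(0, 2, size=1000)
    y = (X[:, 0] + 0.5 * gender + rng.normal(scale=0.5, size=1000) > 0).astype(int)
    model = train_with_safety_test(X, y, gender)
    print("safe model found" if model is not None else "no safe solution found")
```

A production-grade version of this idea would use separate data for candidate selection and for the safety check, and would bound the fairness gap with a high-confidence interval rather than a point estimate; this sketch collapses those steps for brevity.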

Another study from December 2021 discussed the systematic biases associated with AI and how to include underrepresented populations in the future.

Researchers studied information from the University of Michigan and Michigan State University and discovered that most surgical patients were older, White, socioeconomically advantaged men. To increase outreach, the researchers plan to use a mail-in saliva collection kit, hoping it will lead to increased engagement across various populations.