Achieving Clinical AI Fairness Requires Multidisciplinary Collaboration

A recently published perspective argues that developing equitable and unbiased AI in healthcare will require clinicians and industry experts to collaborate.


By Shania Kennedy

In a perspective published recently in npj Digital Medicine, researchers led by a team at Duke-National University of Singapore (NUS) Medical School asserted that the pursuit of fair artificial intelligence (AI) algorithms in healthcare will require significant collaboration among experts across industries and disciplines.

AI has shown potential for a wide range of healthcare applications, but its adoption is limited by concerns about bias in these technologies.

The research team noted that a fair AI model is expected to perform equally well across patient subgroups defined by attributes such as age, gender, and race. However, performance differences do not always indicate bias or unfairness: in the medical context, some differences between patient groups are clinically meaningful and must be taken into account.
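The paper itself does not include code, but the idea of checking a model's performance across subgroups can be shown with a minimal sketch. The snippet below simulates hypothetical patient data (the age bands, labels, and risk scores are illustrative assumptions, not from the study) and computes a per-subgroup AUC, the kind of gap a fairness audit would surface:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical audit: compare a model's discrimination (AUC) across
# patient subgroups. All data below are simulated for illustration.
rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["<65", ">=65"], size=n)  # hypothetical age bands
y_true = rng.integers(0, 2, size=n)          # true outcomes (0/1)

# Simulate risk scores that are slightly noisier for one subgroup,
# producing the kind of performance gap an audit would flag.
noise = np.where(group == ">=65", 0.45, 0.25)
y_score = np.clip(y_true + rng.normal(0.0, noise, size=n), 0.0, 1.0)

for g in np.unique(group):
    mask = group == g
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"subgroup {g}: AUC = {auc:.3f} (n = {mask.sum()})")
```

Whether a gap like this reflects unfairness or a clinically meaningful difference is exactly the judgment the authors argue requires clinical input.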

“Focusing on equity—that is, recognising factors like race, gender, etc., and adjusting the AI algorithm or its application to make sure more vulnerable groups get the care they need—rather than complete equality, is likely a more reasonable approach for clinical AI,” said Ning Yilin, PhD, a research fellow with the Centre for Quantitative Medicine (CQM) at Duke-NUS and a co-first author of the paper, in a press release. “Patient preferences and prognosis are also crucial considerations, as equal treatment does not always mean fair treatment. An example of this is age, which frequently factors into treatment decisions and outcomes.”

Focusing on AI fairness is challenging because meaningful differences between patient groups can be mistakenly treated as biases to be corrected, the researchers indicated. Multiple metrics exist to assess model fairness, but choosing the right one for a given healthcare application can be difficult, as different metrics can disagree on the same model, as illustrated below.
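To see why metric choice matters, consider two common fairness criteria computed on the same predictions: demographic parity, which compares positive-prediction rates across groups, and equal opportunity, which compares true-positive rates. The sketch below is illustrative and not drawn from the paper; the groups, prevalence values, and error rate are assumptions chosen so that disease prevalence genuinely differs between groups:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    # Gap in positive-prediction rates between the two subgroups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_diff(y_true, y_pred, group):
    # Gap in true-positive rates (sensitivity) between the two subgroups.
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return abs(tprs[0] - tprs[1])

rng = np.random.default_rng(1)
n = 2000
group = rng.choice(["A", "B"], size=n)
# Suppose the condition's prevalence genuinely differs between groups.
prev = np.where(group == "A", 0.30, 0.10)
y_true = (rng.random(n) < prev).astype(int)
y_pred = y_true.copy()                 # a near-perfect model...
flip = rng.random(n) < 0.05
y_pred[flip] = 1 - y_pred[flip]        # ...with 5% random errors

print("demographic parity diff:", round(demographic_parity_diff(y_pred, group), 3))
print("equal opportunity diff: ", round(equal_opportunity_diff(y_true, y_pred, group), 3))
# The near-perfect model looks "unfair" under demographic parity simply
# because prevalence differs, yet roughly fair under equal opportunity.
```

Here an accurate model fails demographic parity purely because the groups differ in prevalence, while passing equal opportunity, which is the kind of tension the authors argue clinicians must help resolve.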

To address this, the researchers recommended assessing which patient attributes should be considered “sensitive” for each AI application. Clinicians can play a vital role in this process, as they can provide additional context and help determine whether differences in a variable reflect systemic bias or genuine biological variation.

To this end, the research team further suggested that collaboration among clinicians, ethicists, and AI experts is necessary to ensure AI fairness in healthcare.

"Achieving fairness in the use of AI in healthcare is an important but highly complex issue. Despite extensive developments in fair AI methodologies, it remains challenging to translate them into actual clinical practice due to the nature of healthcare – which involves biological, ethical and social considerations. In order to advance AI practices to benefit patient care, clinicians, AI and industry experts need to work together and take active steps towards addressing fairness in AI,” said co-author and associate professor Daniel Ting, PhD, director of SingHealth’s AI Office and associate professor from the SingHealth Duke-NUS Ophthalmology & Visual Sciences Academic Clinical Programme.

The perspective highlights a broader push in the healthcare industry for AI stakeholders to work together to make sure that AI is ethical and equitable.

In an interview with HealthITAnalytics last year, leaders from Duke, Mayo Clinic, and law firm DLA Piper argued that industry standards and cross-functional collaboration are needed to enable responsible AI deployment in healthcare.

The three organizations, in partnership with multiple other stakeholders, established the Health AI Partnership in December 2021 to guide organizations in navigating the AI software market and establishing best practices for responsible AI deployment.

These best practices, they noted, involve thoroughly understanding healthcare AI’s use cases, considering the associated risks, and committing to collaboration.