Quality & Governance News

Coalition for Health AI Releases Blueprint for Trustworthy AI in Healthcare

CHAI’s blueprint for trustworthy artificial intelligence implementation in healthcare focuses on the care impact, equity, and ethics of these tools.


By Shania Kennedy

The Coalition for Health AI (CHAI) released its ‘Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare’ this week, which outlines recommendations to increase trustworthiness and promote high-quality care within the context of health artificial intelligence (AI) implementation.

The 24-page blueprint is the product of CHAI’s year-long effort to help health systems, AI and data science experts, and other healthcare stakeholders advance health AI while addressing health equity and bias.

"Transparency and trust in AI tools that will be influencing medical decisions are absolutely paramount for patients and clinicians," said Brian Anderson, MD, a co-founder of the coalition and chief digital health physician at MITRE, in a press release detailing the blueprint. "The CHAI Blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care."

The blueprint, a draft of which was published last year, “is designed to be a flexible ‘living document’ that will enable us to maintain a continuous focus on these critically important dimensions of algorithmic healthcare,” according to Michael Pencina, PhD, a co-founder of the coalition and director of Duke AI Health.

The blueprint outlines several key elements of trustworthy AI use in healthcare: usefulness, safety, accountability and transparency, explainability and interpretability, fairness, security and resilience, and enhanced privacy.

In addition, CHAI identifies next steps to facilitate the development and use of these tools, centered on health system preparedness and assessment, trustworthiness and transparency throughout an AI tool’s lifecycle, and an integrated data infrastructure to support discovery, evaluation, and assurance related to health AI.

The document builds upon the White House’s Blueprint for an AI Bill of Rights, which contains five guidelines for the design, use, and deployment of automated and AI-based tools to protect Americans from harm as such devices become more common across US industries, including healthcare.

CHAI’s blueprint also expands upon the US Department of Commerce’s National Institute of Standards and Technology (NIST) AI Risk Management Framework, which aims to cultivate trust in AI while mitigating risk.

"The needs of all patients must be foremost in this effort. In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology. Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The Blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia, and industry,” said John Halamka, MD, president of Mayo Clinic Platform and a co-founder of the coalition.

CHAI launched in March 2022 as an initiative to help identify where standards, best practices, and guidance need to be developed for AI-related research, technology, and policy. CHAI membership includes leaders from over 150 organizations, including Johns Hopkins University, Duke AI Health, Mayo Clinic, Google, Microsoft, and Stanford Medicine, in addition to US federal observers from the US Food and Drug Administration (FDA), the Office of the National Coordinator for Health Information Technology (ONC), the National Institutes of Health (NIH), and the White House Office of Science and Technology Policy (OSTP).