Quality & Governance News

Stanford Launches Responsible Health Artificial Intelligence Initiative

RAISE-Health, Stanford’s new health AI initiative, will focus on responsible innovation, safety, and ethical considerations in the growing field.


By Shania Kennedy

Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) launched the Responsible AI for Safe and Equitable Health (RAISE-Health) initiative this week, which aims to tackle safety and ethical issues around the use of artificial intelligence (AI) in healthcare.

Defining the responsible use of health AI has been a challenge for stakeholders, leading some to develop ethical AI standards and call for an evidence-based approach to AI development and deployment.

Others maintain that collaboration is key to responsible health AI deployment, leading to the creation of partnerships like the Coalition for Health AI (CHAI) and raising questions about federal regulatory guidance, such as how the White House’s Blueprint for an AI Bill of Rights may apply to healthcare.

RAISE-Health seeks to address these challenges by establishing a platform to define a structured framework for health AI safeguards and standards and to regularly convene multidisciplinary experts on the topic.

“AI has the potential to impact every aspect of health and medicine,” said Lloyd Minor, MD, dean of the Stanford School of Medicine and co-leader of the initiative, in the press release. “We have to act with urgency to ensure that this technology advances in line with the interests of everyone, from the research bench to the patient bedside and beyond.”

To this end, RAISE-Health is set to act as a repository for research and collaborations in the health AI space, providing stakeholders with data, tools, models, standards, and best practices.

The goals of the initiative include educating researchers, providers, and patients on how to navigate AI advances; accelerating AI research with the potential to solve the biggest challenges in modern medicine; and using the responsible integration of health AI to improve clinical outcomes.

The press release indicates that while AI has significant potential to transform healthcare, concerns about the safety, ethics, and responsible use of the technology must be addressed to help build public trust.

“AI is evolving at an incredible pace; so, too, must our capacity to manage, navigate and direct its path,” noted Stanford HAI co-director and computer science professor Fei-Fei Li, PhD, who co-leads the initiative alongside Minor. “Through this initiative, we are seeking to engage our students, our faculty and the broader community to help shape the future of AI, ensuring it reflects the interests of all stakeholders — patients, families and society at large.”

Other initiatives are also working to address these challenges around the use of AI in healthcare.

In April, CHAI released its ‘Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare,’ which contains recommendations to promote high-quality clinical care and increase trustworthiness within the context of health AI deployment.

The blueprint is the culmination of a year-long effort to assist healthcare stakeholders aiming to advance health AI while simultaneously tackling bias and health equity issues.