Tools & Strategies News

Foundation Models Could Help Advance AI in Healthcare

Experts argue that a new class of models may lead to more adaptable, affordable healthcare artificial intelligence tools.


By Shania Kennedy

In a blog post published last week, experts from Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) discussed the opportunities provided by foundation models for AI in healthcare.

Foundation models are AI models trained on large, unlabeled datasets and designed to be highly adaptable to new applications, according to the 2021 paper in which Stanford researchers coined the term. These models build on fundamental ideas from deep learning, with two key differences: foundation models don’t require labeled datasets for model training, and they leverage a pretraining process to improve adaptability and sample efficiency.
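To make the pretraining idea concrete, the toy sketch below (an illustration of the general technique, not code from the HAI post; the model and all names are hypothetical) trains a small encoder to reconstruct a masked element of an unlabeled sequence, so the training signal comes from the data itself rather than from human-provided labels:

```python
import torch
import torch.nn as nn

# Toy "foundation model" pretraining: reconstruct masked inputs.
# No labels are required -- the target is drawn from the data itself.
class TinyEncoder(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mix = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)  # predicts the hidden token

    def forward(self, tokens):
        h, _ = self.mix(self.embed(tokens))
        return self.head(h)

MASK_ID = 0
model = TinyEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    seq = torch.randint(1, 100, (8, 16))        # unlabeled sequences
    pos = torch.randint(0, 16, (8,))            # one position to mask per sequence
    target = seq[torch.arange(8), pos].clone()  # "label" comes from the data
    masked = seq.clone()
    masked[torch.arange(8), pos] = MASK_ID      # hide that token
    logits = model(masked)[torch.arange(8), pos]
    loss = nn.functional.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```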

The authors posited that these differences could help foundation models advance AI in healthcare by providing opportunities to address gaps created by current models.

The authors stated that despite the widely held notion that data from EHRs can be used to build classification, prediction, and survival models, most models trained on these data do not currently translate into clinical gains. They further noted that the resources, financial and otherwise, required to create and manage AI models are unsustainable for health systems.

Foundation models can help address these issues and support model-guided care workflows with the potential to significantly improve care and outcomes, the authors argued. Specifically, they create opportunities to adapt AI with fewer manually labeled examples; to develop modular, reusable, and robust AI; to make multimodality the new normal; to create new interfaces for human-AI collaboration; and to ease the cost of developing, deploying, and maintaining AI in hospitals.

READ MORE: What Is Deep Learning and How Will It Change Healthcare?

AI adaptability is a major challenge because most health AI models are trained for a single purpose using a mix of more stable biological inputs and more varied operational inputs, they explained. These models often have poor generalizability, limiting their clinical use. To combat this, some models are retrained on local hospital data to ensure quality performance with that patient population.

However, this can create complexity, cost, and personnel barriers to leveraging AI.

“This is where foundation models can provide a mechanism for rapidly and inexpensively adapting models for local use. Rather than specializing in a single task, foundation models capture a wide breadth of knowledge from unlabeled data. Then, instead of training models from scratch, practitioners can adapt an existing foundation model, a process that requires substantially less labeled training data,” the authors stated in the post.
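As a rough illustration of that adaptation process (a minimal sketch under our own assumptions, not the authors' method), the example below freezes a stand-in for a pretrained foundation encoder and trains only a small task head on a handful of locally labeled examples:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained foundation encoder (hypothetical; in practice
# this would be loaded from a shared checkpoint, not initialized here).
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))
for p in encoder.parameters():
    p.requires_grad = False  # reuse the pretrained knowledge as-is

# Only this small task-specific head is trained for the local task.
head = nn.Linear(64, 2)  # e.g., a binary risk classifier (illustrative)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# A handful of locally labeled examples -- far fewer than training
# a model from scratch would require.
x = torch.randn(32, 128)        # 32 patient representations (synthetic)
y = torch.randint(0, 2, (32,))  # 32 local labels (synthetic)

for epoch in range(50):
    logits = head(encoder(x))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```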

Because foundation models are trained on massive datasets, however, their expense raises a potential barrier to building upon existing work and fostering innovation. The authors noted that the cost of training foundation models could prevent widespread adoption and use, but they also argued that sharing models, and leveraging their ability to adapt easily to new tasks, can mitigate these challenges under a framework for modular, reusable, and robust AI.

The authors explained that normalizing multimodality will be key to advancing AI in healthcare.

READ MORE: Responsible AI Deployment in Healthcare Requires Collaboration

“Today’s medical AI models often make use of a single input modality, such as medical images, clinical notes, or structured data like ICD codes. However, health records are inherently multimodal, containing a mix of provider’s notes, billing codes, laboratory data, images, vital signs, and increasingly genomic sequencing, wearables, and more. The multimodality of EHR is only going to grow,” they stated. “No modality in isolation provides a complete picture of a person’s health state. Analyzing pixel features of medical images frequently requires consulting structured records to interpret findings, so why should AI models be limited to a single modality?”

Foundation models, in contrast, can combine multiple modalities during training, which may lead to richer representations of patient health for use in downstream applications and more ways for clinicians to interact with AI.
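One simple way such a combination can work (a hypothetical sketch, not an architecture described in the post) is to encode each modality separately and concatenate the embeddings into a joint patient representation:

```python
import torch
import torch.nn as nn

class MultimodalPatientModel(nn.Module):
    """Fuses an imaging embedding with structured EHR features (illustrative)."""
    def __init__(self, image_dim=512, ehr_dim=40, fused_dim=64):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, fused_dim)  # e.g., CNN features
        self.ehr_proj = nn.Linear(ehr_dim, fused_dim)      # e.g., labs, vitals, codes
        self.classifier = nn.Linear(2 * fused_dim, 2)      # downstream task head

    def forward(self, image_emb, ehr_feats):
        fused = torch.cat([self.image_proj(image_emb),
                           self.ehr_proj(ehr_feats)], dim=-1)  # joint representation
        return self.classifier(torch.relu(fused))

model = MultimodalPatientModel()
image_emb = torch.randn(4, 512)  # embeddings from an imaging encoder (synthetic)
ehr_feats = torch.randn(4, 40)   # normalized structured-record features (synthetic)
print(model(image_emb, ehr_feats).shape)  # torch.Size([4, 2])
```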

As a result, foundation models could help support human-AI collaboration.

“Current healthcare AI models typically generate output that is presented to clinicians who have limited options to interrogate and refine a model’s output. Foundation models present new opportunities for interacting with AI models, including natural language interfaces and the ability to engage in a dialogue,” they stated.

This, in turn, can put human-AI collaboration front and center while also improving generalization: collections of natural language instructions can be built from the questions clinicians generate as they interact with EHRs.
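Such a collection could take a form like the records below; the fields and clinical examples here are entirely hypothetical, included only to make the idea concrete:

```python
# Hypothetical record format for clinician-generated instructions paired
# with EHR context -- the kind of collection that could be used to adapt
# a foundation model to follow natural language requests.
instruction_records = [
    {
        "instruction": "List this patient's active anticoagulants.",
        "context": "structured medication list from the EHR",
        "response": "warfarin 5 mg daily",
    },
    {
        "instruction": "Summarize kidney function over the last 30 days.",
        "context": "creatinine and eGFR lab results",
        "response": "eGFR stable between 55 and 60 mL/min/1.73m2",
    },
]
```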

READ MORE: Value of an Evidence-Based AI Development and Deployment Approach

Finally, the authors highlighted how foundation models might ease the cost of developing, deploying, and maintaining AI in hospitals.

They noted that developing, deploying, and maintaining a classifier or predictive model for a single clinical task can often cost up to $200,000. Commercial solutions can also fall short, as vendors typically charge health systems on a per-model or per-prediction basis.

Instead, the authors argued that healthcare needs a better paradigm: rather than a one-model-per-use-case mindset, stakeholders should focus on creating models that are cheaper to build, have reusable parts, can handle multiple data types, and are resilient to changes in the underlying data.

“By lowering the time and energy required to train models, we can focus on ensuring that their use leads to fair allocation of resources with the potential to meaningfully improve clinical care and efficiency and create a new, supercharged framework for AI in healthcare. Adopting the use of foundation models is a promising path to that end vision,” they concluded.

The article is part of a larger healthcare AI series HAI is publishing, with past pieces focused on AI utility, explainability, generalizability, what healthcare leaders need to know, whether health AI delivers on its promises, the role of deidentification in protecting privacy, and ensuring that healthcare algorithms are fair.