Assessing AI, Data Use Key Priorities of Stanford’s First Data Chief

In an interview, Nigam Shah, MBBS, PhD, Stanford Health Care's inaugural data chief, discussed the importance of data access and governance, the use of AI in healthcare, and his top three priorities in his new role.

As new modes of data analysis evolve and become increasingly integrated into clinical care, healthcare organizations are looking to get ahead of the curve and solidify their approach to data science.

Stanford Health Care in California is no exception. Earlier this month, the health system created a chief data scientist position and selected Nigam Shah, MBBS, PhD, to serve in the new role, effective March 1.

A professor of medicine and biomedical data science, Shah will lead Stanford's artificial intelligence and data science efforts.

In a wide-ranging phone interview with HealthITAnalytics, Shah discussed his views on critical healthcare data issues, the use of AI, including why AI concerns should not necessarily center on the potential for bias, and his three main priorities in his new role.

On healthcare data science

Healthcare data issues are widespread, and they vary based on the type of data, Shah said.

"We shouldn't be thinking of data as one thing," he said. "I mean, there's imaging data, there's EHR data, there's waveform data, there are genomics data, gene expression data — all of them have different issues that you have to manage."

For example, with regard to imaging data, the issues center on storage, whereas with race and ethnicity information, challenges are typically related to data collection.

With the latter, there is not much to do on the backend if incorrect information is collected on the frontend, Shah said.

"So, what I would say is that the priority is to make sure that the data are available for use," he added.

Shah plans to work closely with Stanford's chief software and chief enterprise architect to ensure that all necessary data items are addressable and accessible via an application programming interface (API).

Following data access, governance is the next big issue that providers must be aware of.

"Once the technical work is done, you run into [these] legal, ethical, moral issues about what is okay to do with somebody else's data," Shah said. "And that requires strong governance. What is the fiduciary responsibility of a health system regarding my data? And reasonable people have different views on this. So, figuring out a process for governance, I think, is also a top priority."

On AI use in clinical care

Stanford aims to use AI to advance the science and practice of medicine and healthcare delivery, and Shah has been tasked with leading these efforts.

There are many ways AI can support both clinical and administrative processes in healthcare.

For example, AI algorithms can be applied to specific conditions with subtypes that have different clinical outcomes.

"In order to advance the clinical care, two things need to happen — we need a test, either an algorithm or a laboratory test, where a new patient walks in, and we can figure out are they subtype one, two, or three? And then second, we need the ability to treat the three subtypes differently," Shah said. "And then we advance healthcare delivery."

AI can be applied to both processes — helping identify subtypes and determining the treatment for each subtype. But it remains to be seen which will ultimately be more helpful in improving patient care, Shah said.

On the other hand, AI can be more immediately impactful in revamping burdensome administrative processes, like note summarization, transcription, and patient intake.

"At most health systems, the first interface [the patient] has is with a piece of paper and a clipboard," Shah said. "Like, why? In this day and age?"

He added that these processes represent low-hanging fruit where AI can be leveraged.

Though AI use in healthcare is rising, there have been concerns about bias potentially creeping into AI-based methods or algorithms.

A study published last year showed that deep-learning models trained on large cancer genetics and tissue histology datasets could easily identify the institution that submitted an image, then use the submitting site as a shortcut for predicting patient outcomes rather than relying on the patient's biology.

Another study, published in 2019, found that predictive analytics algorithms referred healthier white patients to care management programs at higher rates than less healthy Black patients.

"The issues being raised are definitely real," Shah said. "And it is important in healthcare's adoption of AI to be aware of these issues so that we don't, sort of, stumble on them. But as of right now, [AI] use is just not widespread enough."

Essentially, it is like discussing TSA protocols at the time of the Wright brothers' very first flight, he explained.

"[The potential for bias in AI] will definitely become a problem, but when the Wright brothers flew their plane for the first time, who thought that seat selection was going to be a problem?" he added.

More concerning, according to Shah, are the capacity constraints under which healthcare systems operate.

For instance, AI may help identify patients at risk of readmission, but if the hospital lacks the capacity to care for them, having that information doesn't help.

"A prediction that doesn't change actions is useless," Shah said. "And what I don't see people talking about is what are the capacity constraints of healthcare systems — can they do anything with whatever the AI tells [them]."

On his priorities as data chief

One month into his new role, Shah has clear-cut priorities. They fall into three buckets: taking inventory of algorithms in use; developing assessments of their usefulness, reliability, and fairness; and accelerating innovation.

First, Shah plans to take inventory of the different clinical scenarios in which an algorithmic solution is used.

"We just need to know what it is that we're using and whether it's working or not," he said.

Next, he will work on developing a systematic process for assessing the usefulness, reliability, and fairness of such solutions, whether they were created in-house or purchased from vendors.

"If you think about algorithms, they help you risk stratify or recommend some action," Shah said. "And then based on that, some human being in the healthcare system has to take action. And the usefulness and reliability and fairness — all of that depends on what that action is, how often it is successfully completed, and what is our capacity to take such actions."

Thus, there needs to be a process or framework to assess usefulness, reliability, and fairness, he added.

Lastly, Shah will focus on supporting Stanford researchers and faculty working to develop innovative technologies and models.

To aid his efforts, Shah will collaborate with several academic groups at the Stanford School of Medicine, such as the Center for Biomedical Informatics Research, the Department of Biomedical Data Science, and the Center for AI in Medicine and Imaging.