
Privacy Protection Key for Using Patient Data to Develop AI Tools

Using patient data for secondary purposes, like training artificial intelligence algorithms, will require developers to protect patients’ privacy.


By Jessica Kent

Clinical data should be treated as a public good when used for research or artificial intelligence algorithm development, so long as patients’ privacy is protected, according to a report from the Radiological Society of North America (RSNA).

As artificial intelligence and machine learning are increasingly applied to medical imaging, bringing the potential for streamlined analysis and faster diagnoses, the industry still lacks a broad consensus on an ethical framework for sharing this data.

“Now that we have electronic access to clinical data and the data processing tools, we can dramatically accelerate our ability to gain understanding and develop new applications that can benefit patients and populations,” said study lead author David B. Larson, MD, MBA, from the Stanford University School of Medicine. “But unsettled questions regarding the ethical use of the data often preclude the sharing of that information.”

To guide data sharing for AI development, RSNA developed a framework that outlines how patient data can be used ethically for secondary purposes.

“Medical data, which are simply recorded observations, are acquired for the purposes of providing patient care,” Larson said.


“When that care is provided, that purpose is fulfilled, so we need to find another way to think about how these recorded observations should be used for other purposes. We believe that patients, provider organizations, and algorithm developers all have ethical obligations to help ensure that these observations are used to benefit future patients, recognizing that protecting patient privacy is paramount.”

The study authors noted that ideally, no financial transactions would be involved in the use of clinical data for research and the development of AI models. However, resources are required to develop and maintain knowledge and models derived from clinical data.

“The necessity of financing AI algorithm development sets up a potential conflict between ethical principles and market forces, with associated questions. First, who should be allowed to profit from the data? Second, how can we ensure that data are used appropriately?” RSNA wrote.

“The answer to the question of who is entitled to profit from secondary uses of clinical data rests on the concept of fairness, which dictates that individuals and entities should profit from the data roughly in proportion to the value of their respective contributions.”

The organization stated that clinical data should be treated as a public good when used for secondary purposes.


“This means that, on one hand, clinical data should be made available to researchers and developers after it has been aggregated and all patient identifiers have been removed,” said Larson. “On the other hand, all who interact with such data should be held to high ethical standards, including protecting patient privacy and not selling clinical data.”
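
As a rough illustration only, and not part of the RSNA framework, a de-identification pass before release might look something like the following sketch. The field names and the identifier list are hypothetical; real de-identification (for example, HIPAA Safe Harbor or expert determination) covers far more than direct identifiers.

```python
from copy import deepcopy

# Hypothetical direct identifiers to strip before any record is released.
DIRECT_IDENTIFIERS = {
    "patient_name", "mrn", "ssn", "address", "phone", "email", "birth_date",
}


def deidentify_record(record: dict) -> dict:
    """Return a copy of a record with direct identifiers removed."""
    clean = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        clean.pop(field, None)
    return clean


def prepare_release(records: list[dict]) -> list[dict]:
    """De-identify every record and keep only those that pass a final check."""
    released = []
    for record in records:
        clean = deidentify_record(record)
        # Defensive check: refuse to release anything still carrying an identifier.
        if any(key in clean for key in DIRECT_IDENTIFIERS):
            continue
        released.append(clean)
    return released


if __name__ == "__main__":
    sample = [{
        "patient_name": "Jane Doe",     # direct identifier: removed
        "mrn": "123456",                # direct identifier: removed
        "modality": "CT",               # clinical observation: retained
        "finding": "pulmonary nodule",  # clinical observation: retained
    }]
    print(prepare_release(sample))
```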

This idea implies at least two critical imperatives, RSNA said. First, that no single entity is entitled to profit directly from the data, and second, that dissemination and use of the data for development of beneficial knowledge should be encouraged and facilitated.

“The dissemination of deidentified clinical data to all qualifying stakeholders diminishes the value of the data to any single stakeholder because it prevents any single entity from holding a monopoly on the resource. Thus, sharing data through exclusive contracts or licenses is unacceptable because the use of clinical data by one entity would preclude others from using it,” the authors said.

Additionally, RSNA emphasized that providers and patients should monitor and protect the data and ensure it is used only for appropriate purposes.

“No one ‘owns’ the data in the traditional sense—not even the patients themselves. Rather, all who interact with or control the data have an obligation to help ensure that the data are used for the benefit of future patients and of society, including AI developers,” the authors said.


When sharing data with outside entities, organizations should ensure that certain conditions are met, including that individual patient privacy is protected and safeguarded at all times. If a patient’s name were unintentionally made visible in the data, the recipient would be required to notify the sharing party and to discard the data as directed.
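
As a rough sketch of the recipient’s side of that obligation, an intake check along the following lines could implement the notify-and-discard step. The function names, field names, and callbacks are hypothetical illustrations, not drawn from the report.

```python
# Hypothetical direct identifiers that should never appear in received data.
DIRECT_IDENTIFIERS = {"patient_name", "mrn", "ssn", "birth_date"}


def intake_check(record: dict, notify_sender, discard) -> bool:
    """Accept a record only if it contains no direct identifiers.

    notify_sender and discard are caller-supplied callbacks representing the
    recipient's obligations to the sharing organization.
    """
    leaked = DIRECT_IDENTIFIERS & record.keys()
    if leaked:
        notify_sender(leaked)   # report the unintentional disclosure
        discard(record)         # dispose of the data as the sharer directs
        return False
    return True
```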

“We strongly emphasize that protection of patient privacy is paramount. The data must be de-identified. In fact, those who receive the data must not make any attempts to re-identify patients through identifying technology,” Larson said.

“We extend the ethical obligations of provider organizations to all who interact with the data.”

With these ethical principles, RSNA is highlighting the role patient privacy will play in data sharing and algorithm development.

“We hope this framework will contribute to more productive dialogue, both in the fields of medicine and computer science, as well as with policymakers, as we work to thoughtfully translate ethical considerations into regulatory and legal requirements,” Larson said.