Features

The Clinical Promise and Ethical Pitfalls of Electronic Phenotyping

Healthcare big data mining can provide valuable insights into a patient’s health but may reveal non-disclosed information, raising concerns about patient privacy.

As information technology (IT) methods for disease surveillance, predictive analytics, and clinical decision support become more advanced, big data mining will be crucial to ensuring these tools are built on large, high-quality datasets.

Data mining can enable electronic phenotyping, a process that queries electronic health records (EHRs) and clinical information systems to extract patient characteristics or conditions for research purposes.

While electronic phenotyping can be useful for use cases such as identifying patients with a particular disease for recruitment in a clinical trial, it can also reveal sensitive patient characteristics not disclosed to a clinician, such as transgender identity.

The revelation of non-disclosed characteristics via electronic phenotyping raises multiple ethical concerns around patient consent, data use, and clinician bias.

To address these challenges, experts propose that ethical guidelines be updated to balance patient autonomy and privacy with the broader potential benefits of data sharing.

In this primer, one such expert — Kenrick Cato, PhD, RN, CPHIMS, FAAN, a professor of Informatics at the University of Pennsylvania and Nurse Scientist – Pediatric Data and Analytics at the Children's Hospital of Philadelphia — sat down with HealthITAnalytics to discuss his research in the space and how healthcare organizations can improve their approach to electronic phenotyping.

WHAT IS ELECTRONIC PHENOTYPING?

At its core, electronic phenotyping is designed to flag patients with particular clinical characteristics.

Duke University defines a phenotype as a measurable cognitive, behavioral, or biological marker more often found in patients with a condition or disease than in those without.

Electronic phenotyping often centers around computable phenotypes, or “a clinical condition, characteristic, or set of clinical features that can be determined solely from the data in EHRs and ancillary data sources and does not require chart review or interpretation by a clinician.”

These phenotypes are also sometimes called EHR condition definitions or EHR-based phenotype definitions.

These definitions typically consist of logic expressions and data elements that a computer can interpret and execute without human intervention. They can be used to develop phenotyping algorithms, which can be integrated into EHR systems.
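To make the idea concrete, below is a minimal sketch of what a computable phenotype can look like in code. The record layout, field names, codes, and thresholds are illustrative assumptions rather than a definition drawn from any published phenotype library, but they show how a phenotype reduces to logic a computer can evaluate against structured EHR data elements without chart review.

```python
# Illustrative sketch of a computable phenotype expressed as logic over
# structured EHR data elements. The record layout, field names, codes, and
# thresholds are hypothetical and chosen only to show the idea of a rule a
# computer can evaluate without clinician chart review.

from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    diagnosis_codes: set = field(default_factory=set)   # e.g., ICD-10-CM codes
    medications: set = field(default_factory=set)        # active medication names
    hba1c_results: list = field(default_factory=list)    # lab values, in percent


def matches_t2dm_phenotype(record: PatientRecord) -> bool:
    """Flag a record as a possible type 2 diabetes phenotype match.

    Illustrative logic: a relevant diagnosis code, plus either a qualifying
    lab result or a diabetes medication on the active list.
    """
    has_dx = any(code.startswith("E11") for code in record.diagnosis_codes)
    has_lab = any(value >= 6.5 for value in record.hba1c_results)
    has_med = "metformin" in {m.lower() for m in record.medications}
    return has_dx and (has_lab or has_med)


if __name__ == "__main__":
    example = PatientRecord(
        diagnosis_codes={"E11.9"},
        medications={"Metformin"},
        hba1c_results=[7.1],
    )
    print(matches_t2dm_phenotype(example))  # True for this example record
```

In practice, a definition like this would be run against an entire clinical data warehouse to return a cohort of flagged patients, which is what makes the approach attractive for trial recruitment and multi-site research.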

Duke further indicates that standardized computable phenotypes can enable large-scale, multi-health system clinical trials while ensuring reproducibility and reliability.

The information leveraged during the electronic phenotyping process can be sourced from routinely collected EHR data alongside ancillary data sources such as claims data, billing information, or disease registries.

Nurses play a critical role in ensuring that these data are accurate and useful for applications like clinical decision support, Cato noted, explaining that much of his work centers around modeling electronic patient data to help provide decision support for clinicians, patients, and caregivers.

“A lot of the data entered into electronic systems [are] entered for billing and regulatory reasons. When people want to make decisions, it's sometimes hard,” he explained. “I’ve spent a lot of time modeling mostly nursing data because most of the data in the EHR is entered by nurses.”

These data, he continued, can provide a wealth of insights into a patient's health. Since nurses typically spend the most time with patients in an inpatient setting, the information entered by nursing staff is likely to provide a more accurate, real-time look into a patient’s state.

These insights are valuable across healthcare settings and use cases, including electronic phenotyping.

However, to effectively utilize electronic phenotyping, stakeholders must avoid multiple ethical pitfalls.

TACKLING ETHICAL CHALLENGES

In a 2017 paper published in the Journal of Empirical Research on Human Research Ethics, Cato and colleagues highlighted that while electronic phenotyping has significant potential to improve clinical decision support, predictive analytics, and disease surveillance, it can also create opportunities for harm.

Through two clinical vignettes, one using the person-based characteristic of transgender identity and the other using the health history-based characteristic of substance abuse, the researchers illustrated that phenotyping algorithms integrated into EHRs can lead clinicians to ask a patient questions they would not otherwise have asked, prompted by non-disclosed characteristics the tool revealed.

These vignettes allowed the research team to identify multiple ethical issues associated with the discovery of these characteristics via electronic phenotyping: the patient’s ability to consent to the use of their data, the possibility of a clinician’s conscious or unconscious biases influencing care delivery, the need to ensure that the benefits of electronic phenotyping justify the risks of potential patient harm, and the ability to suppress pediatric data.

To tackle these challenges, the researchers recommended updating ethical guidelines around electronic phenotyping to support principles of respect for persons, beneficence, and justice established by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.

However, Cato indicated that since the paper’s publication, some of the challenges associated with electronic phenotyping have shifted.

“When I wrote that paper, I was working on a project where we were creating phenotypes of transgender patients, simply because at that time we weren't capturing transgender status in the EHR,” he explained.

Capturing transgender identity was difficult because of poorly configured EHR systems, limited clinician-patient interaction time in outpatient settings, and a lack of training among clinicians to address gender-related topics, Cato continued.

But capturing transgender identity accurately in the EHR is an integral part of ensuring that these patients do not experience health disparities, such as access barriers, that lead to adverse outcomes.

By identifying transgender patients who may be at risk, clinicians could be better positioned to provide adequate care and address care gaps for their patients.

However, recent political developments have given Cato pause.

“I’ve kind of paused in the work because I've seen the negative potential when you think about the emerging political environment where transgender individuals have higher rates of suicide,” he explained.

For example, if a life insurance company found out that one of its members is transgender, the company could raise that individual’s rates, he continued. And that, Cato indicated, is a comparatively mild example; there are worse ones depending on where people are in the US or what the political climate is like.

In the current US political climate, where gender-affirming care is increasingly targeted by state legislatures despite broad support from the medical community, prediction tasks around transgender identity are significantly more serious, adding another layer to the ethical considerations laid out in the research team’s 2017 work.

In this context, Cato emphasized that while Institutional Review Boards (IRBs) have pivoted to better scrutinize research around electronic phenotyping to minimize patient harm, involving community members affected by the research is critical.

“[The healthcare system] needs to do a better job of including community members in the ethical process of reviewing this work,” he postulated. “I know it's hard, but it's important…it affects people's lives in a serious way. And not only that, it adds a [different] perspective to the research, as individuals who don't have the lived experience of the harm that could be caused don't think about it.”

Along with involving members of the community in ethical reviews of relevant studies, Cato also underscored several other ways healthcare organizations could work to improve their electronic phenotyping research.

ADOPTING AN IMPROVED APPROACH

Health systems face various challenges when trying to improve their approach to electronic phenotyping.

A major hurdle is resource allocation, as large hospitals must devote significant resources to provide specialized care, acquire and maintain expensive equipment, and pay their workforce. This can leave little to invest in other areas, like electronic phenotyping and other advanced tools.

Outdated or weak ethical frameworks for patient privacy also play a role.

“Unfortunately, when [comparing] the ethical frameworks of, say, the [European Union] to the United States, they're different,” Cato explained. “There's much more of a focus on protecting individuals in the EU than there is in the US…There's much more trying not to stifle business in the US.”

While balancing business interests with patient interests in the US healthcare system is not a new challenge, Cato underscored that a commitment to protect patients on a national scale could help solve some of these issues.

“It's difficult because I don't know of any clinician that's not focused on helping patients, but there's a larger economic pressure. But I don't expect these things to change. I don't expect someone in an individual hospital to make these changes. These are policy changes that need to happen.”

“Because if an individual hospital says, ‘I'm going to do the right thing,’ and that puts them out of business, it's not a rational [decision],” he continued. “It's not supposed to happen on that level. It really needs to happen on a national policy level.”

Cato further noted that efforts to combat bias in data and artificial intelligence (AI) models are needed, as models can reproduce biases present in EHRs and other clinical data, in clinical workflows, and in the AI tools themselves.

Detecting bias and adding sufficient guardrails to protect against it, however, remains challenging because those developing the AI systems cannot always say with complete certainty how a model is learning from the data or generating outputs.
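Even so, one practical guardrail, where subgroup labels are available, is to audit a model’s error rates across patient groups before deployment. The sketch below uses hypothetical data and group labels to illustrate that kind of subgroup error audit; it is not drawn from Cato’s work or any specific vendor tool, and real audits would use validated cohorts and a broader set of fairness metrics.

```python
# Minimal sketch of one common bias check: comparing a model's error rates
# across demographic subgroups. The labels, predictions, and group values
# below are hypothetical placeholders for illustration only.

import numpy as np


def false_negative_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true positives the model missed."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float(((y_pred == 0) & positives).sum() / positives.sum())


def audit_by_group(y_true: np.ndarray, y_pred: np.ndarray, groups: np.ndarray) -> dict:
    """Report the false negative rate separately for each subgroup."""
    return {
        g: false_negative_rate(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }


if __name__ == "__main__":
    # Toy example: outcome labels, model predictions, and a subgroup indicator.
    y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 0])
    groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    print(audit_by_group(y_true, y_pred, groups))
    # A large gap between groups (here, group B's missed cases) is a signal
    # to investigate the data and model before deployment.
```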

Prioritizing the involvement of more diverse individuals and teams during these processes can help circumvent some of these concerns.

Despite these challenges, Cato posited that electronic phenotyping could help healthcare stakeholders identify underrepresented patient populations and better study relevant interventions.

“One of the other reasons I was interested in phenotyping transgender individuals is that we don't know a lot about, say, long-term exposure to hormone treatment,” he said. “We just don't know. We've got better cohorts now, but they're still relatively limited. They still have selection bias because they're the people who volunteered for the cohorts. I still see the potential in those types of phenotyping to learn lots of things.”

Cato also highlighted that clinicians are overworked, and tools like electronic phenotyping may help streamline their workload and allow them to focus on their patients.

Prioritizing people throughout the tool development and deployment process can help avoid some of the major pitfalls along the way, he explained.

“We really need to make sure that we incorporate individuals into the regulatory process that can speak to their lived experience of how some of this stuff might affect them. And then also, we have to, with technology, always center people [and] what's good for people. And if we do that, then we don’t ever have to worry about all the doomsday scenarios that people are throwing around.”