Are Google’s Predictive Analytics Worth Patient Privacy Risks?

A Google company offering predictive analytics to the NHS is finding itself in hot water after a data-sharing agreement raised questions over patient privacy and data use.

The healthcare industry hasn’t always been kind to Google, which has dabbled in the shallows of patient data and personalized health for several years now, but a promising new contract with the United Kingdom’s National Health Service (NHS) may provide the tech titan with the predictive analytics proving ground it has been looking for.

A data-sharing agreement between DeepMind, a Google-owned predictive analytics company, and the Royal Free NHS Trust will foster the development of an app that improves the detection and treatment of acute kidney injuries, a major cause of emergency department visits, hospitalizations, and patient deaths.

DeepMind will have access to data on more than 1.6 million patients who visit three London hospitals each year to support its Streams app, a clinical decision support tool that sends care alerts and allows providers to access test results in near real-time.

Such a large data pool seems like an important ingredient for developing meaningful algorithms to target high-risk patients, but the big data deal has stirred up significant concerns about the broad range of clinical history and personally identifiable information (PII) included in the process, and about the way technology giants like Google should handle sensitive data.

“The Royal Free London approached DeepMind with the aim of developing an app that improves the detection of acute kidney injury (AKI) by immediately reviewing blood test results for signs of deterioration and sending an alert and the results to the most appropriate clinician via a dedicated handheld device,” says an NHS FAQ.

“AKI affects more than one in six in-patients and can lead to prolonged hospital stays, admission to critical care units and, in some cases, death. The Streams app improves the detection of AKI by immediately reviewing blood test results for signs of deterioration and sending an alert and the results to the most appropriate clinician.”
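
At its core, the detection logic the NHS describes is a threshold rule over serial blood tests. As a rough illustration only – not DeepMind’s actual algorithm – the sketch below stages an alert from the ratio of a patient’s current serum creatinine to an established baseline, loosely following the staging thresholds in NHS England’s AKI detection guidance; all names and values here are illustrative.

# Simplified sketch of a creatinine-ratio AKI alert, loosely modelled on
# NHS England's AKI detection guidance. Thresholds, staging, and alerting
# are illustrative; a real system must handle baseline selection, units,
# and absolute-rise criteria far more carefully.

def aki_stage(current: float, baseline: float) -> int:
    """Return an AKI stage (0 = no alert) from a creatinine ratio."""
    ratio = current / baseline
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0


def maybe_alert(patient_id: str, current: float, baseline: float) -> None:
    """Print an alert when the ratio crosses a staging threshold."""
    stage = aki_stage(current, baseline)
    if stage > 0:
        # In an app like Streams, an alert such as this would be pushed to
        # the most appropriate clinician's handheld device with the results.
        print(f"AKI stage {stage} alert for patient {patient_id}: "
              f"creatinine {current} vs baseline {baseline} umol/L")


maybe_alert("RF-000123", current=180.0, baseline=90.0)  # fires a stage 2 alert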

But the NHS has provided DeepMind with access to much more than just the clinical information that could help predict a downturn in a renal patient, New Scientist revealed when it published the official contract between the two parties. 

DeepMind will have access to HL7 feeds that include all admission, discharge, and transfer (ADT) data along with pathology and radiology results, not to mention information from the Secondary Uses Service (SUS), the comprehensive big data repository that supports analytics and reporting for the entire NHS.
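
For readers unfamiliar with the plumbing, HL7 v2 feeds of this kind are pipe-delimited text messages streamed between hospital systems. The snippet below is a deliberately minimal sketch, in Python, of pulling a lab value out of one such message; the sample message is invented, and a production consumer would rely on a proper HL7 library and site-specific mappings.

# Minimal sketch of reading an HL7 v2 result message of the kind carried
# on the feeds described above. The sample message is invented; field
# positions follow standard HL7 v2 conventions (MSH header, PID patient
# identification, OBX observation).

SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|LAB|RFH|STREAMS|RFH|202301011200||ORU^R01|MSG0001|P|2.4",
    "PID|1||1234567^^^RFH^MRN||DOE^JANE",
    "OBX|1|NM|CREA^Creatinine||180|umol/L|45-84|H",
])


def parse_segments(message: str) -> list[list[str]]:
    """Split an HL7 v2 message into segments and pipe-delimited fields."""
    return [segment.split("|") for segment in message.split("\r") if segment]


for fields in parse_segments(SAMPLE_ORU):
    if fields[0] == "OBX":
        # OBX fields: 3 = observation id, 5 = value, 6 = units, 8 = flag
        test, value, units, flag = fields[3], fields[5], fields[6], fields[8]
        print(f"{test.split('^')[1]}: {value} {units} (flag {flag})")
        # -> Creatinine: 180 umol/L (flag H)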

The contract permits DeepMind to collect and store the “last five years of archival data of all of the above, in any defined format, to aid service evaluation and audit” of its product.  The document also states that because the information is being utilized for “direct patient care purposes,” DeepMind is not required to replace PII with pseudonymised data.
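
For context, pseudonymisation means replacing direct identifiers with stable tokens so that records can still be linked without revealing who they belong to. A minimal sketch of one common approach, a keyed HMAC over the identifier, follows; the key handling and field names are purely illustrative and say nothing about the Trust’s actual mechanisms.

import hashlib
import hmac

# Hypothetical key; in practice this would live in a managed secret store.
SECRET_KEY = b"replace-with-a-securely-managed-key"


def pseudonymise(identifier: str) -> str:
    """Derive a stable, keyed, non-reversible token from an identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability


record = {"nhs_number": "943 476 5919", "creatinine": 180}
record["nhs_number"] = pseudonymise(record["nhs_number"])
print(record)  # the same patient always maps to the same token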

Under its information governance guidelines, the Royal Free London Trust uses an opt-out approach to patient data sharing, and provides information about how patients can withdraw their “implied consent” to having their data shared with third-party organizations. 
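
In practice, honouring that withdrawal comes down to filtering records against an opt-out register before anything is shared with a third party. The sketch below shows the shape of such a check; the register and field names are invented for illustration, not drawn from the Trust’s actual systems.

# Illustrative opt-out filter; the register and field names are invented.
OPT_OUT_REGISTER = {"943 476 5919"}  # patients who withdrew implied consent


def shareable(records: list[dict]) -> list[dict]:
    """Drop records for patients who have opted out of third-party sharing."""
    return [r for r in records if r["nhs_number"] not in OPT_OUT_REGISTER]


records = [
    {"nhs_number": "943 476 5919", "creatinine": 180},
    {"nhs_number": "401 023 2137", "creatinine": 75},
]
print(shareable(records))  # only the second record is eligible for sharing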

The Trust notes, however, that the DeepMind project is not unique: more than 1500 other third-party organizations have “undergone similar NHS information governance processes.”


DeepMind is using the NHS’ standard data sharing agreement, and has pledged to follow good information governance guidelines, including the destruction of all PII in September 2017, when the project ends.  Only DeepMind staff who have undergone data governance training will be allowed to access the information, and data will not be stored or processed at the company’s London offices.

And there seems to be little question that predictive analytics apps like Streams have the potential to significantly improve care.  Before New Scientist started to ring the patient privacy warning bell, news outlets in the UK praised DeepMind and its partner, Hark, for publishing research showing just how well the product worked.

In February, the Guardian published an interview with Hark leader Ara Darzi, a former health minister whose peer-reviewed study found that clinicians who used the Hark app responded 37 percent faster to deteriorating patients than clinicians who used pagers.

In May, the same paper released a piece accusing DeepMind and Google, “a sprawling octopus of a company with tentacles in all our lives,” of heralding the destruction of the human race by obliterating the right to privacy and using secret documents to deprive patients of the ability to choose whether or not to contribute their personal information.

What changed?  Little more than the realization that large and seemingly opaque companies like Google, Apple, Microsoft, IBM, and their partners are now in the best position to leverage their sophisticated analytics competencies to help clinicians make better decisions – and that the healthcare system will always require patients to evaluate the trade-off between absolute privacy and the best possible clinical care.

Patients must constantly weigh the risks of sharing information, even within their ostensibly confidential patient-provider relationship.  Aside from some situations where patients lie or omit information in order to avoid guilt or shame, the American public, at least, is generally on board with the idea that the rewards of big data sharing are worth the risks.

A 2014 survey by the Office of the National Coordinator found that 70 percent of patients would put aside their security fears to engage in health information exchange that might improve their outcomes. 

And a 2015 poll by Rock Health found that 59 percent of patients were willing to share their data for medical research.  Eighteen percent said they felt comfortable sharing their personal information with technology companies.  Google ranked as the most trusted of those tech entities, beating out Facebook two to one.

Perhaps even more surprisingly, 39 percent said they would release their data in exchange for nothing more than cold, hard cash.

Does this mean that media-driven fears about patient privacy in the era of big data analytics and precision medicine are overblown?  Yes and no. 

First of all, it is worth noting that the vast majority of health data breaches over the past few years have taken place at healthcare providers and insurance companies, not analytics developers.

But that doesn’t mean that software companies are immune from misusing data or abusing their access to sensitive information – and there’s no guarantee that a vendor with petabytes of patient data at its disposal won’t feature in the next big data breach headline. 

Neither are companies like Google exempt from concerns that they are collecting and storing much, much more information than they let on, and consumers are not always aware that their tweets, status updates, and check-ins can be mined to reveal much more about their health status than they realize.

In the same Rock Health poll that indicated a broad willingness to share information, 92 percent of participants said they believed they should be in complete control of decision-making when it comes to their health data, which suggests a significant disconnect between the ideal state of the healthcare system and its harsh reality.


As large-scale public programs like the Precision Medicine Initiative and similarly weighty private projects start to reveal the depth of the research community’s thirst for patient data, companies with analytics on their minds will need to carefully chart a course through the conflicting attitudes of their primary consumer targets.

Transparency seems to be particularly important when attempting to engender the public’s trust, as evidenced by how quickly the media turned on DeepMind as soon as it appeared the company might be withholding something about its activities.

Analytics developers should be upfront about what data will be collected, how it will be stored, and what it will be used for, and they should also provide accessible, easy-to-follow instructions for opting out of these partnerships. 

Multiple recent studies have shown that opt-out recruitment still produces high enrollment rates in research applications, even when the opt-out mechanism is prominently presented to patients, meaning that analytics programs have little to lose by advertising the option up front – and much to gain from their honesty.

Ensuring that patients know what data is being collected and how it will be used will only become more important as the healthcare system embraces advanced analytics as a primary mechanism for decision-making. 

As long as consumers continue to view developers with suspicion and mistrust, not even the most effective and game-changing clinical decision support application can succeed without a transparent and proactive approach to patient data collection and use.
