Guest Article: The Causes of Digital Patient Privacy Loss in EHRs and Other Health IT Systems

Check out the latest from Shahid Shah, courtesy of The Healthcare IT Guy.

This past Friday I was invited by the Patient Privacy Rights (PPR) Foundation to lead a discussion about privacy and EHRs. The discussion, entitled “Fact vs. Fiction: Best Privacy Practices for EHRs in the Cloud,” addressed patient privacy concerns and potential solutions for doctors working with EHRs.

While we are all somewhat disturbed by the slow erosion of privacy in all aspects of our digital lives, the rather rapid loss of patient privacy around health data is especially unnerving because healthcare is so near and dear to us all. To make sure we provided some actionable intelligence during the PPR discussion, I started the talk off by giving some of the reasons why we’re losing patient privacy, in the hope that it might spur innovators to think about ways of slowing the inevitable losses.

Here are some of the causes I mentioned on Friday, not in any particular order:

  • Most patients, even technically astute ones, don’t really understand the concept of digital privacy. Digital is a “cyber world” that is hard to picture, so patients believe their data and privacy are protected when they may not be. I usually explain patient privacy in the digital world to non-techies using the analogy of curtains, doors, and windows. The digital health IT world of today is like walking into a patient’s room in a hospital that is one large shared space with no curtains, no walls, and no doors (even for bathrooms or showers!). In this imaginary world, every private conversation can be overheard and every procedure is performed in front of others, all without the patient’s consent, and their objections don’t even matter. If patients can imagine that scenario, they will have a good idea of how digital privacy works today: a big shared room where everyone sees and hears everything, even over patients’ objections.
  • It’s faster and easier to create non-privacy-aware IT solutions than privacy-aware ones. Having built dozens of HIPAA-compliant, highly secure enterprise health IT systems over the past few decades, I can say from anecdotal experience that when features and functions compete with privacy, features win. Product designers, architects, and engineers talk the talk, but given the difficulty of creating viable systems in a coordinated, integrated digital ecosystem, it’s really hard to walk the privacy walk. Because digital privacy is hard to describe even in a simple single-enterprise system, the difficulty of describing and defining it across multiple integrated systems is often the reason modern systems have poor privacy features.
  • It’s less expensive to create non-privacy-aware IT solutions. Because designing privacy into the software from the beginning is hard and requires expensive security resources to do so, we often see developers wait until the end of the process to consider privacy. Privacy can no more be added on top of an existing system than security can — either it’s built into the functionality or it’s just going to be missing. Because it’s cheaper to leave it out, it’s often left out.
  • The government is incentivizing and certifying functionality over privacy and security. The meaningful use certification and testing steps focus too much on prescribed functionality and not enough on data-centric privacy capabilities such as notifications, disclosure tracking, and compartmentalization (a minimal sketch of what built-in consent checks and disclosure tracking might look like follows this list). If privacy were important in EHRs, the NIST test plans would cover it. Privacy is difficult to define and even more difficult to implement, so the testing process doesn’t focus on it at this time.
  • Business models that favor privacy loss tend to be more profitable. Data aggregation and homogenization, resale, secondary use, and related business models tend to be quite profitable. The only way they remain profitable is to have easy, unfettered (low-friction) ways of sharing and aggregating data. Because enhanced privacy through opt-in processes, disclosures, and notifications would reduce data sharing and potentially reduce revenues and profits, privacy loss is likely to accompany the inevitable rise of EHRs.
  • Patients don’t really demand privacy from their providers or IT solutions in the same way they demand other things. We like to think that all patients demand digital privacy for their data. However, it’s rare for patients to choose physicians, health systems, or other care providers based on their privacy views. Even when privacy violations are found and punished, it’s uncommon for patients to switch to other providers.
  • Regulations like HIPAA have made it easy for privacy loss to occur. HIPAA has probably done more to harm privacy over the past decade than any other government regulation. More on this in a later post.
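To make the “built in, not bolted on” argument above concrete, here is a minimal sketch of a data-access layer in which consent checking and disclosure tracking sit on the only code path that returns patient data. The record store, consent model, and all names are illustrative assumptions, not any particular EHR’s design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List, Set


@dataclass
class Consent:
    # Purposes the patient has explicitly opted in to, e.g. {"treatment"}.
    allowed_purposes: Set[str] = field(default_factory=set)


@dataclass
class Disclosure:
    # One row in the disclosure log the patient can later review.
    patient_id: str
    requester: str
    purpose: str
    timestamp: str


class PatientRecordStore:
    """Consent checking and disclosure logging live on the only read path."""

    def __init__(self) -> None:
        self._records: Dict[str, dict] = {}
        self._consents: Dict[str, Consent] = {}
        self.disclosure_log: List[Disclosure] = []

    def add_record(self, patient_id: str, record: dict, consent: Consent) -> None:
        self._records[patient_id] = record
        self._consents[patient_id] = consent

    def read(self, patient_id: str, requester: str, purpose: str) -> dict:
        consent = self._consents.get(patient_id, Consent())
        if purpose not in consent.allowed_purposes:
            raise PermissionError(f"No consent for purpose '{purpose}'")
        # The disclosure is recorded before any data leaves the store,
        # so tracking cannot be skipped by a caller.
        self.disclosure_log.append(Disclosure(
            patient_id, requester, purpose,
            datetime.now(timezone.utc).isoformat()))
        return self._records[patient_id]


store = PatientRecordStore()
store.add_record("p1", {"dx": "asthma"}, Consent(allowed_purposes={"treatment"}))
print(store.read("p1", requester="dr_smith", purpose="treatment"))  # logged
# store.read("p1", requester="analytics_vendor", purpose="marketing")  # raises
```

The point of the design is that there is no second, faster read path that skips the check; retrofitting that property onto a system built without it is exactly the expensive, late-stage work described in the list above.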

The only way to improve privacy across the digital spectrum is to recognize that health providers must conduct business in a tricky, intermediary-driven health system with sometimes conflicting goals, such as reducing medical errors or lowering costs (which tend to require more data sharing, not less). Digital patient privacy is important, but there are many valid reasons why privacy is either hard or impossible to achieve in today’s environment. Unless we intelligently and honestly understand why we lose patient privacy, we can’t really create novel and unique solutions to help curb the loss.

What do you think? What other causes of digital patient privacy loss would you add to my list above?

Courtesy of The Healthcare IT Guy.

Everyone expects information they share to be used only once, for one purpose.

This expectation is no surprise; the ethical principle behind it is called ‘single use’ of data.

Humans expect to set and regulate personal boundaries in relationships with others. We only trust people and institutions that don’t share sensitive personal information without asking us first.

People don’t trust governments or corporations that violate their expectations and rights to privacy, i.e., their rights to control the use of personal data.

When the US public realizes that their rights to health information privacy are being violated by the hidden government and corporate use and sale of their most intimate, sensitive information (health data, from prescriptions to diagnoses to DNA), the fallout will be far more devastating than the NSA revelations.

After all, Americans expect some level of government surveillance to protect us from terrorism, but the hidden collection and sale of health data by industry and government is very different: it completely shatters trust in the patient-physician relationship. The lack of trust in electronic health systems already causes 40-50 million people to delay or avoid treatment for serious illnesses, or to hide health information. Current technology causes bad health outcomes.

The Internet and US health technology systems are currently designed to violate human and civil rights to privacy.  The Internet and technology must be rebuilt to restore trust and restore our rights to control personal information.

Deb

Experts tout Blue Button as enabling information exchange between medical provider and patient

Blue Button Plus (BB+) and Direct secure email technologies could put patients in control of all use and disclosure of their electronic health records. BB+ lets us ‘view, download, and transmit’ our own health data to physicians, researchers, or anyone we choose.
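As a rough illustration of that ‘view, download, and transmit’ flow, the sketch below pulls a patient’s own record from a hypothetical BB+ style endpoint and forwards it to a recipient the patient chooses. The URLs, token, and payload shape are invented for illustration; real BB+ deployments have their own API details and use Direct messaging for transmission.

```python
import requests

# Hypothetical endpoints and token -- placeholders, not a real BB+ API.
PORTAL_URL = "https://portal.example.org/bbplus/records/me"
TRANSMIT_URL = "https://direct.example.org/send"
PATIENT_TOKEN = "patient-issued-access-token"


def download_my_record() -> bytes:
    """'Download': the patient pulls their own record with their own credentials."""
    resp = requests.get(PORTAL_URL,
                        headers={"Authorization": f"Bearer {PATIENT_TOKEN}"},
                        timeout=30)
    resp.raise_for_status()
    return resp.content


def transmit_record(record: bytes, recipient: str) -> None:
    """'Transmit': the patient, not the institution, decides who receives the copy."""
    resp = requests.post(TRANSMIT_URL,
                         files={"record": ("my_record.xml", record)},
                         data={"to": recipient},
                         timeout=30)
    resp.raise_for_status()


if __name__ == "__main__":
    record = download_my_record()
    transmit_record(record, "new-doctor@direct.example-clinic.org")
```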

But state Health Information Exchanges (HIEs) don’t allow patients to control the disclosure of personal health data. Some state HIEs don’t even ask for consent; the HIE collects and shares everyone’s health records and no one can opt out. Most state HIEs ask patients to grant thousands of strangers—employees of hospitals, doctors, pharmacies, labs, data clearinghouses, and health insurers—complete access to their electronic health records.

When corporations, government, and HIEs prevent patients from controlling who sees personal health data (from prescriptions to DNA to diagnoses), millions of people every year avoid or delay treatment, or hide information.

HIEs that open the door to even more hidden uses of health data will drive even more patients to avoid treatment, rather than share information that won’t be private.

Health IT systems that harm millions of people every year must be fixed. Technology can put us in control of our data, achieve the benefits and innovations we expect, and prevent harms. We have to change US law to require technologies that put patients in control of their electronic health records.

theDataMap™

theDataMap™ is an online portal for documenting flows of personal data. The goal is to produce a detailed description of personal data flows in the United States.

A comprehensive data map will encourage new uses of personal data, help innovators find new data sources, and educate the public and inform policy makers on data sharing practices so society can act responsibly to reap the benefits of sharing while addressing the risks of harm. To accomplish this goal, the portal engages members of the public in a game-like environment to report and vet reports of personal data sharing.

Members of the public sign up to be Data Detectives and then work with other Data Detectives to report and vet data sharing arrangements found on the Internet. Data Detectives are responsible for the content on theDataMap™.

See the debut of theDataMap™ from the “Celebration of Privacy” during the 2nd International Summit on the Future of Health Privacy.

Sizing Up De-Identification Guidance, Experts Analyze HIPAA Compliance Report (quotes PPR)

To view the full article by Marianne Kolbasuk McGee, please visit: Sizing Up De-Identification Guidance, Experts Analyze HIPAA Compliance Report.

The federal Office for Civil Rights (OCR), charged with protecting the privacy of the nation’s health data, released ‘guidance’ for “de-identifying” health data. Government agencies and corporations want to “de-identify,” release, and sell health data for many uses. There are no penalties for not following the ‘guidance’.

Releasing large databases of “de-identified” health data on thousands or millions of people could enable breakthrough research to improve health, lower costs, and improve quality of care, IF “de-identification” actually protected our privacy so that no one would know it’s our personal data. But it doesn’t.

The ‘guidance’ allows easy ‘re-identification’ of health data. Publicly available databases of other personal information can be quickly compared electronically with ‘de-identified’ health databases, so names can be re-attached, creating valuable, identifiable health data sets.
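A small, invented example of that comparison: if the ‘de-identified’ release keeps quasi-identifiers such as ZIP code, birth date, and sex, an ordinary join against a publicly available roster (a voter file, for instance) re-attaches names. All names, values, and column names below are made up for illustration.

```python
import pandas as pd

# "De-identified" release: names stripped, quasi-identifiers retained.
health = pd.DataFrame([
    {"zip": "73301", "birth_date": "1968-07-31", "sex": "F", "diagnosis": "depression"},
    {"zip": "73301", "birth_date": "1990-02-14", "sex": "M", "diagnosis": "HIV"},
])

# Publicly available roster with the same quasi-identifiers plus names.
public = pd.DataFrame([
    {"name": "Jane Roe", "zip": "73301", "birth_date": "1968-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "73301", "birth_date": "1990-02-14", "sex": "M"},
])

# Re-identification is an ordinary join on the shared columns.
reidentified = health.merge(public, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```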

The “de-identification” methods OCR proposed are:

  • The HIPAA “Safe Harbor” method: if 18 specific identifiers are removed (such as name, address, and age), data can be released without patient consent. But 0.04% of the data can still be ‘re-identified’. (A rough sketch of this kind of scrub follows this list.)
  • Certification by a statistical “expert” that the re-identification risk is “small” allows release of databases without patient consent.
      ◦ There are no requirements to be an “expert”.
      ◦ There is no definition of “small risk”.
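For concreteness, here is a rough sketch of a Safe-Harbor-style scrub: direct identifiers are dropped, ZIP codes are truncated, and dates are generalized to the year. The field names are invented and the real rule’s 18 identifier categories are broader than this toy list; the point is that quasi-identifiers survive the scrub, which is why a small fraction of records can still be singled out.

```python
from datetime import date

# Toy record; field names are invented, not the actual 18 HIPAA categories.
record = {
    "name": "Jane Roe",
    "ssn": "078-05-1120",
    "email": "jane@example.org",
    "zip": "73301",
    "birth_date": date(1968, 7, 31),
    "sex": "F",
    "diagnosis": "depression",
}

DIRECT_IDENTIFIERS = {"name", "ssn", "email"}


def safe_harbor_scrub(rec: dict) -> dict:
    # Drop direct identifiers entirely.
    out = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    # Geographic units smaller than a 3-digit ZIP prefix must go.
    out["zip"] = out["zip"][:3] + "**"
    # Dates are generalized to the year.
    out["birth_year"] = out.pop("birth_date").year
    return out


print(safe_harbor_scrub(record))
# Sex, birth year, and ZIP prefix remain -- in small populations these
# can still single out individuals, which is how the residual
# re-identification risk arises.
```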

Inadequate “de-identification” of health data makes it a big target for re-identification. Health data is so valuable because it can be used for job and credit discrimination and for targeted product marketing of drugs and expensive treatment. The collection and sale of intimately detailed profiles of every person in the US is a major model for online businesses.

The OCR guidance ignores computer science, which has demonstrated that ‘de-identification’ methods can’t prevent re-identification. No single method or approach can work because more and more ‘personally identifiable information’ is becoming publicly available, making it easier and easier to re-identify health data. See “Myths and Fallacies of ‘Personally Identifiable Information’” by Narayanan and Shmatikov, June 2010, at: http://www.cs.utexas.edu/~shmat/shmat_cacm10.pdf. Key quotes from the article:

  • “Powerful re-identification algorithms demonstrate not just a flaw in a specific anonymization technique(s), but the fundamental inadequacy of the entire privacy protection paradigm based on “de-identifying” the data.”
  • “Any information that distinguishes one person from another can be used for re-identifying data.”
  • “Privacy protection has to be built and reasoned about on a case-by-case basis.”

OCR should have recommended what Shmatikov and Narayanan proposed: case-by-case ‘adversarial testing’, comparing a “de-identified” health database to multiple publicly available databases to determine which data fields must be removed to prevent re-identification. See PPR’s paper on “adversarial testing” at: http://patientprivacyrights.org/wp-content/uploads/2010/10/ABlumberg-anonymization-memo.pdf
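A minimal sketch of what such adversarial testing might look like in practice: for each candidate combination of retained fields, count the fraction of released rows that match exactly one row in a public database, then remove or generalize the riskiest fields before release. The threshold, field names, and pandas-based approach are assumptions for illustration, not PPR’s or the authors’ exact procedure.

```python
from itertools import combinations

import pandas as pd


def unique_match_rate(released: pd.DataFrame, public: pd.DataFrame,
                      fields: list) -> float:
    """Fraction of released rows matching exactly one public row on `fields`."""
    counts = public.groupby(fields).size().rename("n_public").reset_index()
    joined = released.merge(counts, on=fields, how="left")
    return float((joined["n_public"] == 1).mean())


def adversarial_test(released: pd.DataFrame, public: pd.DataFrame,
                     candidate_fields: list, threshold: float = 0.0):
    """List field combinations whose unique-match rate exceeds the threshold."""
    risky = []
    for r in range(1, len(candidate_fields) + 1):
        for combo in combinations(candidate_fields, r):
            rate = unique_match_rate(released, public, list(combo))
            if rate > threshold:
                risky.append((combo, rate))
    # Highest-risk combinations first.
    return sorted(risky, key=lambda x: -x[1])
```

Field combinations near the top of the returned list are the ones a real release would need to drop or coarsen before the data could plausibly be called de-identified.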

Simplest, cheapest, and best of all would be to use the stimulus billions to build electronic systems so patients can electronically consent to data use for research and other purposes they approve of. Complex, expensive contracts and difficult ‘work-arounds’ (like ‘adversarial testing’) are needed to protect patient privacy because institutions, not patients, control who can use health data. This is not what the public expects, and it prevents us from exercising our individual rights to decide who can see and use personal health information.