NHS England patient data ‘uploaded to Google servers’, full disclosure demanded

The UK government has been debating illegal disclosures of patient health data: “The issue of which organisations have acquired medical records has been at the centre of political debate in the past few weeks, following reports that actuaries, pharmaceutical firms, government departments and private health providers had either attempted or obtained patient data.”

The article closes with quotes from Phil Booth of medConfidential:

  • “Every day another instance of whole population level data being sold emerges which had been previously denied”.
  • “There is no way for the public to tell that this data has left the HSCIC. The government and NHS England must now come completely clean. Anything less than full disclosure would be a complete betrayal of trust.”

Far worse privacy violations are the norm in the US, yet our government won’t acknowledge that US health IT systems enable hidden sales and sharing of patients’ health data. US patients are prevented from controlling who sees their health records and can’t obtain real-time lists of who has seen and used their personal health data.

IMS Health Holdings’ SEC filing for an IPO shows how the data broker industry violates Americans’ strong rights to control the use of personal health information:

  • IMS buys and aggregates sensitive “prescription and promotional” records, “electronic medical records,” “claims data,” “social media” and more to create “comprehensive,” “longitudinal” health records on “400 million” patients.
  • All purchases and subsequent sales of personal health records are hidden from patients.  Patients are not asked for informed consent or given meaningful notice.
  • IMS Health Holdings sells health data to “5,000 clients,” including the US Government.
  • IMS buys “proprietary data sourced from over 100,000 data suppliers covering over 780,000 data feeds globally.”

Data brokers claim they don’t violate our rights to health information privacy because our data are “de-identified” or “anonymized,” but computer scientists have proven it’s easy to re-identify aggregated, longitudinal data sets.
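
To make the re-identification point concrete, here is a toy sketch in Python. The names and values are invented and it is not drawn from any actual data set; it simply shows the mechanics of a linkage attack, in which a “de-identified” record is joined to an ordinary public roster (voter rolls, marketing lists) on quasi-identifiers such as ZIP code, birth date, and sex.

```python
# Toy linkage (re-identification) attack. All data below is invented;
# real attacks join far larger files, but the mechanics are the same.

# "De-identified" longitudinal records: no names, but quasi-identifiers remain.
deidentified_claims = [
    {"zip": "78701", "birth_date": "1961-07-28", "sex": "F", "dx": "bipolar disorder"},
    {"zip": "78705", "birth_date": "1983-02-14", "sex": "M", "dx": "HIV"},
]

# A public or purchasable roster with names attached (e.g., voter registration data).
public_roster = [
    {"name": "Jane Doe", "zip": "78701", "birth_date": "1961-07-28", "sex": "F"},
    {"name": "John Roe", "zip": "78705", "birth_date": "1983-02-14", "sex": "M"},
]

def reidentify(claims, roster):
    """Join the two data sets on (zip, birth_date, sex) and return named diagnoses."""
    index = {(p["zip"], p["birth_date"], p["sex"]): p["name"] for p in roster}
    matches = []
    for record in claims:
        key = (record["zip"], record["birth_date"], record["sex"])
        if key in index:
            matches.append((index[key], record["dx"]))
    return matches

print(reidentify(deidentified_claims, public_roster))
# [('Jane Doe', 'bipolar disorder'), ('John Roe', 'HIV')]
```

The more longitudinal the record, the more quasi-identifiers it carries and the easier this join becomes.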

deb

This blog was written in response to the following article: NHS England patient data ‘uploaded to Google servers’, Tory MP says

NHS legally barred from selling patient data for commercial use. When will the US wake up?

When will the US bar the sale of patient data for commercial use?

1st: Public has to wake up.

2nd: The LIE that sales of patient data are “for research” must be exposed.

US law permits any corporation to buy/sell/share patient data for commerce (i.e., BIG DATA analytics and proprietary products) without patient consent or knowledge. This is a fact.

deb

This blog was written in response to the following article: NHS legally barred from selling patient data for commercial use

What You Need to Know About Patient Matching and Your Privacy and What You Can Do About It

Today, ONC released a report on patient matching practices, and to the casual reader it will look like a byzantine subject. It’s not.

You should care about patient matching, and you will.

It impacts your ability to coordinate care, purchase life and disability insurance, and maybe even keep your job. Through ID theft, it also impacts your safety and security. Patient matching’s most significant impact, however, could be on your pocketbook, as it’s being used to fix prices and reduce competition in a high-deductible insurance system that makes families subject to up to $12,700 in out-of-pocket expenses every year.

Patient matching is the healthcare cousin of NSA surveillance.

Health IT’s watershed will come when people finally realize that hospital privacy and security practices are unfair and we begin to demand consent, data minimization, and transparency for our most intimate information. The practices suggested by Patient Privacy Rights are relatively simple and obvious and will be discussed toward the end of this article.

Health IT tries to be different from other IT sectors. There are many reasons for this, and few of them are good. Health IT practices are dictated by HIPAA, whereas the rest of IT is governed by the FTC or the Fair Credit Reporting Act. Healthcare is mostly paid for by third-party insurance, so the risks of fraud are different than in traditional markets.

Healthcare is delivered by strictly licensed professionals who are regulated differently from the institutions that purchase the health IT. These are the major reasons for healthcare IT exceptionalism, but they are not a good excuse for bad privacy and security practices, so this is about to change.

Health IT privacy and security are in tatters, and nowhere is it more evident than the “patient matching” discussion. Although HIPAA has some significant security features, it also eliminated a patient’s right to consent and Fair Information Practice.

Patient matching by all sorts of health information aggregators and health information exchanges is involuntary and hidden from the patient as much as NSA surveillance is.

Patients don’t have any idea of how many databases are tracking our every healthcare action. We have no equivalent to the Fair Credit Reporting Act to cover these database operators. The databases are both public and private. The public ones are called Health Information Exchanges, All Payer Claims Databases, Prescription Drug Monitoring Programs, Mental Health Registries, Medicaid, and more.

The private ones are called “analytics” and sell billions of dollars’ worth of our aggregated data to hospitals eager to improve their margins, if not their mission.

The ONC report overlooks the obvious issue of FAIRNESS to the patient. The core principles of Fair Information Practice are Consent, Minimization, and Transparency. The current report ignores all of these issues:

- Consent is not sought. By definition, patient matching is required for information sharing; patient matching without patient consent leads to sharing of PHI without patient consent. The consent form that is used to authorize patient matching must list the actual parameters that will be used for the match. Today’s generic Notices of Privacy Practices are as inadequate as signing a blank check.

- Data is not minimized. Citizen matching outside of the health sector is usually based on a unique and well-understood identifier such as a phone number, email, or SSN. To the extent that the report does not allow patients to specify their own matching criterion, a lot of extra private data is being shared for patient matching purposes. This violates data minimization.

- Transparency is absent. The patient is not notified when they are matched. This violates the most basic principles of error management and security. In banking or online services, it is routine to get a simple email or a call when a security-sensitive transaction is made.

This must be required of all patient matching in healthcare. In addition, patients are not given access to the matching database. This elementary degree of transparency for credit bureaus that match citizens is law under the Fair Credit Reporting Act and should be at least as strict in health care.
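
As a rough illustration of what Consent, Minimization, and Transparency could look like in software, here is a minimal sketch in Python. It is hypothetical code, not any vendor’s product: the matcher uses only a patient-specified identifier (such as a Direct address), appends every match to a log the patient can inspect, and notifies the patient each time a match occurs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PatientRecord:
    # Data minimization: the patient-specified identifier (e.g., a Direct address)
    # is the only field used for matching; demographics stay out of the matcher.
    direct_address: str
    notify: callable                                # e.g., send an email or text
    match_log: list = field(default_factory=list)   # transparency: patient-visible log

def match_and_notify(records, requested_address, requester):
    """Match on the patient-chosen identifier only, log the event, and notify the patient."""
    for record in records:
        if record.direct_address == requested_address:
            event = {
                "when": datetime.now(timezone.utc).isoformat(),
                "matched_on": "direct_address",     # the consent form listed this field
                "requested_by": requester,
            }
            record.match_log.append(event)          # the patient can review this later
            record.notify(f"Your record was matched by {requester}.")
            return record
    return None

# Hypothetical usage:
alice = PatientRecord("alice@direct.example.org", notify=print)
match_and_notify([alice], "alice@direct.example.org", "Example Hospital HIE")
print(alice.match_log)
```

The point is not the specific fields but the pattern: match on what the consent form lists, share nothing extra, and tell the patient it happened.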

These elementary features of any EHR and any exchange are the watershed defining patient-centered health IT. If a sense of privacy and trust doesn’t push our service providers to treat patients as first-class users, then the global need for improved cybersecurity will have to drive the shift. Healthcare is critical infrastructure just as much as food and energy.

But what can you, as a patient, do to hasten your emancipation? I would start with this simple checklist:

Opt out of sharing your health records unless the system offers:

  • Direct secure messaging with patients
  • Plain email or text notification of records matching
  • Patient-specified Direct email as match criterion
  • Your specific matching identifiers displayed on all consent forms
  • Online patient access to matchers and other aggregator databases

None of these five requirements are too hard. Google, Apple and your bank have done all of these things for years. The time has come for healthcare to follow suit.

Adrian Gropper, MD is Chief Technical Officer of Patient Privacy Rights and participates in Blue Button+, Direct secure messaging governance efforts and the evolution of patient-directed health information exchange.

Check out the Latest from Dr. Gropper, courtesy of The Healthcare Blog.

Did Tim Armstrong’s ‘Distressed Babies’ Comment Violate HIPAA Privacy Laws?

US citizens have a fundamental Constitutional right to health information privacy—but can’t easily sue. Only federal employees can sue under the Privacy Act of 1974, as vets did when a laptop with millions of health records was stolen. Even with strong state health privacy laws and state constitutional rights to privacy in place, it’s very hard to sue because most courts demand proof of monetary harm. This new digital disaster, the exposure and sale of sensitive personal health data, can’t be stopped without stronger, clearer federal laws, or unless US citizens boycott the corporations that violate their rights to health privacy.

-Deb

This blog written in response to the following article:

Did Tim Armstrong’s ‘Distressed Babies’ Comment Violate HIPAA Privacy Laws?
By Abby Ohlheiser
The Wire, February 10, 2014

Guest Blog – The AOL Babies: Our Healthcare Crisis in a Nut

Check out the latest from Nic Terry, courtesy of HealthLawProf Blog.

Where does one start with AOL CEO Armstrong’s ridiculous and unfeeling justifications for changes in his company’s 401(k) plan? Cable TV and Twitter came out of the blocks fast with the obvious critiques. And the outrage only increased after novelist Deanna Fei took to Slate to identify her daughter as one of the subjects of Armstrong’s implied criticism. Armstrong has now apologized and reversed his earlier decision.

As the corporate spin doctors contain the damage, Armstrong’s statements likely will recede from memory, although I am still hoping The Onion will memorialize Armstrong’s entry into the healthcare debate (suggested headline, “CEO Discovers Nation’s Healthcare Crisis Caused by 25 Ounce Baby”). But supposing (just supposing) your health law students ask about the story in class this week. What sort of journey can you take them on?

First (but only if you are feeling particularly mean), you could start with HIPAA privacy. After all, intuitively it seemed strange to hear an employer publicly describing the serious health problems of employees’ family members. With luck your students will volunteer that the HIPAA Privacy Rule does not apply to employers (not “covered entities”). True, but AOL provided employees and their families with a health plan. Assume this was an employer-sponsored plan of some scale. It remains the case that the plan and not the employer is subject to the Privacy Rule, although following the Omnibus rule, the plan and its business associates are going to face increased regulation (such as breach notification, new privacy notices, etc.). The employer’s responsibilities are to be found at 45 CFR 164.504 and primarily 164.504(f) (and here we descend deep into the HIPAA weeds). The employer must ensure that the plan sets out the plan members’ privacy rights vis-à-vis the employer. For plans like these the employer can be passed somewhat de-identified summary information (though for very limited purposes that don’t seem to include TV appearances). However, if the employer essentially administers the plan, then things get more complicated. Firewalls are required between different groups of employees, and employer use of PHI is severely limited. By the way, and in fairness to Mr. Armstrong, there are many things we don’t know about the AOL health plan, the source of his information about the “distressed babies,” whether any PHI had been de-identified, etc. Yet, at the very least, AOL may have opened itself up to the OCR asking similar questions and starting an investigation into how AOL treats enrollee information.

Second, this storm about the babies’ health insurance should provide a good basis for discussion of the various types of health insurance and their differential treatment by the Affordable Care Act. A large company likely will offer either a fully-insured or self-insured plan to its employees. If the latter, would your students have recommended reinsurance against claim “spikes” with a stop-loss policy? The ACA should have relatively little impact on such plans or their cost, except where the plans fall beneath the essential benefits floor. Contrast such plans with those traditionally offered on the individual market, which are now being replaced with lower-cost (subject again to extra costs associated with essential benefits) health-exchange-offered plans.

Third, this entire episode raises the question of health care costs and, specifically, the pricing of health care. On first hearing, a million-dollar price tag seems extraordinary. Yet as Ms. Fei noted in her Slate article, her daughter spent three months in a neonatal ICU and endured innumerable procedures and tests resulting in “a 3-inch thick folder of hospital bills that range from a few dollars and cents to the high six figures.” Now, the ACA may be criticized for not doing enough to cut costs (how about a quick pop quiz on what it does try to do?), but is there any truth to the argument that it raises health care costs? Recent investigative work by Steve Brill and fine scholarship by Erin Fuse Brown have highlighted both high prices and high differential pricing in health care. So why would a corporate executive (either directly or indirectly) blame high prices on the ACA? Are, for example, technology markets so different that the reasons for health care costs are underappreciated? And by extension, instead of fighting the ACA, why are corporate CEOs not urging a second round of legislation aimed specifically at reducing the cost of healthcare for all? After all, it is highly unlikely FFS pricing would be tolerated in their non-health domains. Or does such a group prefer the status quo and what Beatrix Hoffman critically terms rationing by price?

New CLIA rule talks the talk, but it doesn’t walk the walk

Deborah Peel, MD, Founder and Chair of Patient Privacy Rights

The federal government released an update to the CLIA rule this week that will require all labs to send test results directly to patients. But the regulations fail to achieve the stated intent to help patients. The rule allows labs to delay patient access to test results up to 30 days, and the process for directly obtaining personal test results from labs is not automated.

The new rule also fails to help patients in significant ways:

  • Real-time, online test results are not required. The federal government should have required all labs to use technology that benefits patients by enabling easy, automatic access to test results via the Internet in real time. Unless we can obtain real-time access to test results, we can’t get a timely second opinion or verify that the appropriate tests were ordered at the right time for our symptoms and diseases. (A sketch of what such access could look like follows this list.)
  • Labs are allowed to charge fees for providing test results to patients.  If labs can charge fees, they will not automate the process for patients to obtain results. Labs that automate patient access to test results online would incur a one-time cost.  After labs automate the process, human ‘work’ or time is no longer needed to provide patients their test results, so the labs would have no ongoing costs to recoup from patients.
  • Labs should be banned from selling, sharing, or disclosing patient test results to anyone except the physician who ordered the tests without meaningful informed consent. This unfair and deceptive trade practice should be stopped. No patient expects labs to sell or share their test results with any other person or company except the physician who ordered the test(s).
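
To show how little technology the real-time piece would require, here is a minimal sketch of a patient pulling her own laboratory results over a standards-based, FHIR-style REST interface. The server URL and patient ID are invented, and nothing in the new CLIA rule obliges a lab to offer such an interface; that gap is exactly the problem.

```python
import requests

# Hypothetical FHIR server and patient ID; the CLIA rule does not require labs
# to expose any such interface, which is the point being made above.
FHIR_BASE = "https://lab.example.org/fhir"
PATIENT_ID = "12345"

def fetch_lab_results(base_url, patient_id):
    """Return recent laboratory Observations for a patient, newest first."""
    response = requests.get(
        f"{base_url}/Observation",
        params={"patient": patient_id, "category": "laboratory", "_sort": "-date"},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()
    results = []
    for entry in bundle.get("entry", []):
        obs = entry["resource"]
        results.append({
            "test": obs.get("code", {}).get("text"),
            "value": obs.get("valueQuantity", {}).get("value"),
            "unit": obs.get("valueQuantity", {}).get("unit"),
            "issued": obs.get("issued"),
        })
    return results

if __name__ == "__main__":
    for result in fetch_lab_results(FHIR_BASE, PATIENT_ID):
        print(result)
```

Once an interface like this exists, providing results to the patient costs the lab essentially nothing per request, which is why ongoing patient fees are so hard to justify.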

This rule raises a question: why do so many federal rules for improving the healthcare system fail to require technologies that benefit patients?

Technology could provide enormous benefits to patients, but the US government caters to the healthcare and technology industries, instead of protecting patients.

Current US health IT systems actually facilitate the exploitation of patients’ records via technology. When HHS eliminated patient control over personal health data from HIPAA in 2002, it created a massive hidden US data broker industry that sells, shares, aggregates, and discloses longitudinal patient profiles (for an example, see IMS’ SEC filing, with details about selling 400M longitudinal patient profiles to 5K clients, including the U.S. government).

Meanwhile, even the most mundane, annoying, repetitive tasks patients must perform today–like filling out new paper forms with personal information every time we visit a doctor–are not automated for our convenience or to improve data quality and accuracy.

Shouldn’t IT improve patients’ experiences and treatment, and restore personal control over sensitive health information?

deb

You can also view a copy of this blog post here

Guest Article: Can You Ever Opt Out from Data Brokers?

Check out the latest from Debra Diener, courtesy of Privacy Made Simple.

Consumers may wonder how it is that they get ads, emails and other information from companies with whom they have had no interaction on- or offline. Maybe they’re particularly confused if they’ve set their privacy settings to block cookies and other tracking devices.

The reality is that data brokers gather, compile and then sell lists of personal information to companies. So what can consumers do if they want to try and protect their information from being compiled and sold by data brokers? The answer is “it’s not easy,” especially given the number of data brokers and the range of information they collect.

Julia Angwin has written a newly published book, Dragnet Nation, that focuses, in part, on her efforts to identify data brokers and then get the information that brokers have about her.  I plan on reading her book as I heard her discuss it recently and have just read her January 30th article, “Privacy Tools: Opting Out from Data Brokers” posted on ProPublica (www.propublica.org).

Her ProPublica article summarizes the steps required by some of the data brokers in order for her to opt out of information collection. As Ms. Angwin writes, there’s no law requiring data brokers to offer consumers that option. She very helpfully attaches two spreadsheets to her article with the names of companies tracking information along with links to their privacy pages and, for those data brokers offering an opt-out, the instructions for doing so. As she writes, many of the data brokers require consumers who want to opt out to provide personal information and identification (e.g., a driver’s license).

Ms. Angwin’s spreadsheets of 212 data brokers provide consumers with a very useful resource. She is also very candid in describing the difficulties in finding her own information and what she calls “some minor successes” in finding data brokers who had her information and opting out.

Guest Article: The Causes of Digital Patient Privacy Loss in EHRs and Other Health IT Systems

Check out the latest from Shahid Shah, courtesy of The Healthcare IT Guy.

This past Friday I was invited by the Patient Privacy Rights (PPR) Foundation to lead a discussion about privacy and EHRs. The discussion, entitled “Fact vs. Fiction: Best Privacy Practices for EHRs in the Cloud,” addressed patient privacy concerns and potential solutions for doctors working with EHRs.

While we are all somewhat disturbed by the slow erosion of privacy in all aspects of our digital lives, the rather rapid loss of patient privacy around health data is especially unnerving because healthcare is so near and dear to us all. To make sure we provided some actionable intelligence during the PPR discussion, I started the talk off by giving some of the reasons why we’re losing patient privacy, in the hope that it might spur innovators to think about ways of slowing down the inevitable losses.

Here are some of the causes I mentioned on Friday, not in any particular order:

  • Most patients, even technically astute ones, don’t really understand the concept of digital privacy. Digital is a “cyber world” and not easy to picture, so patients believe their data and privacy are protected when they may not be. I usually explain patient privacy in the digital world to non-techies using the analogy of curtains, doors, and windows. The digital health IT world of today is like walking into a hospital patient’s room that is one large shared space with no curtains, no walls, no doors, etc. (even for bathrooms or showers!). In this imaginary world, every private conversation occurs where others can hear it, all procedures are performed in front of others, and so on, without the patient’s consent, and their objections don’t even matter. If they can imagine that scenario, then patients will probably have a good idea of how digital privacy is handled today — a big shared room where everyone sees and hears everything, even over patients’ objections.
  • It’s faster and easier to create non-privacy-aware IT solutions than privacy-aware ones. Having built dozens of HIPAA-compliant and highly secure enterprise health IT systems over decades, I can say from anecdotal experience that when it comes to features and functions vs. privacy, features win. Product designers, architects, and engineers talk the talk, but given the difficulties of creating viable systems in a coordinated, integrated digital ecosystem, it’s really hard to walk the privacy walk. Because digital privacy is so hard to describe even in simple single-enterprise systems, the difficulty of describing and defining it across multiple integrated systems is often the reason for poor privacy features in modern systems.
  • It’s less expensive to create non-privacy-aware IT solutions. Because designing privacy into the software from the beginning is hard and requires expensive security resources to do so, we often see developers wait until the end of the process to consider privacy. Privacy can no more be added on top of an existing system than security can — either it’s built into the functionality or it’s just going to be missing. Because it’s cheaper to leave it out, it’s often left out.
  • The government is incentivizing and certifying functionality over privacy and security. All the meaningful use certification and testing steps are focused too much on prescribed functionality and not enough on data-centric privacy capabilities such as notifications, disclosure tracking, and compartmentalization (a minimal sketch of what disclosure tracking could look like follows this list). If privacy were important in EHRs, then the NIST test plans would cover it. Privacy is difficult to define and even more difficult to implement, so the testing process doesn’t focus on it at this time.
  • Business models that favor privacy loss tend to be more profitable. Data aggregation and homogenization, resale, secondary use, and related business models tend to be quite profitable. The only way they will remain profitable is to have easy and unfettered (low-friction) ways of sharing and aggregating data. Because enhanced privacy through opt-in processes, disclosures, and notifications would end up reducing data sharing and potentially reducing revenues and profit, we see that privacy loss is going to happen with the inevitable rise of EHRs.
  • Patients don’t really demand privacy from their providers or IT solutions in the same way they demand other things. We like to think that all patients demand digital privacy for their data. However, it’s rare for patients to choose physicians, health systems, or other care providers based on their privacy views. Even when privacy violations are found and punished, it’s uncommon for patients to switch to other providers.
  • Regulations like HIPAA have made it easy for privacy loss to occur. HIPAA has probably done more to harm privacy over the past decade than any other government regulation. More on this in a later post.
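
To make “disclosure tracking” from the certification point above less abstract, here is a minimal sketch of the kind of append-only accounting an EHR could keep and expose to the patient. It is an illustration only; no current test plan requires it, and the file name and fields are invented.

```python
import csv
from datetime import datetime, timezone

DISCLOSURE_LOG = "disclosures.csv"   # hypothetical, append-only accounting file

def record_disclosure(patient_id, recipient, purpose, data_elements):
    """Append one row per disclosure so the patient can later see who got what, and why."""
    with open(DISCLOSURE_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            patient_id,
            recipient,
            purpose,
            ";".join(data_elements),
        ])

def disclosures_for(patient_id):
    """Everything ever disclosed about one patient -- the view a patient portal could show."""
    with open(DISCLOSURE_LOG, newline="") as f:
        return [row for row in csv.reader(f) if row[1] == patient_id]

# Hypothetical usage:
record_disclosure("pt-001", "Example Analytics LLC", "claims analysis", ["dx codes", "rx history"])
print(disclosures_for("pt-001"))
```

The patient and an auditor could read the same log, which is the whole point: disclosures stop being invisible.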

The only way to improve privacy across the digital spectrum is to recognize that health providers need to conduct business in a tricky, intermediary-driven health system with sometimes conflicting business goals, like reducing medical errors or lowering costs (which can only come with more data sharing, not less). Digital patient privacy is important, but there are many valid reasons why privacy is either hard or impossible to achieve in today’s environment. Unless we intelligently and honestly understand why we lose patient privacy, we can’t really create novel and unique solutions to help curb the loss.

What do you think? What other causes of digital patient privacy loss would you add to my list above?

Courtesy of The Healthcare IT Guy.

The Biggest Data Myths of 2013

The biggest myth about the “Big Data” users of the entire nation’s health information is that the personal health data they use was acquired legally and ethically.

Just ask anyone you know if they ever agreed to the hidden use and sale of sensitive personal information about their minds and bodies by corporations or “research” businesses for analytics, sales, research or any other use. The answer is “no.”

Americans have very strong individual rights to health information privacy, i.e., to control the use of their most sensitive personal information. If US citizens have any “right to privacy,” that right has always applied to sensitive personal health information. This was very clear for our paper medical records and is embodied in the Hippocratic Oath as the requirement to obtain informed consent before disclosing patient information (with rare exceptions).

The IPO filing by IMS Health Holdings at the SEC exposed the vast number of hidden health data sellers and buyers. Buying, aggregating, and selling the nation’s health data is an “unfair and deceptive” trade practice. (Read more of Dr. Peel’s comments on the IMS filing here.)

Does the public know or expect that IMS (and the hundreds of thousands of other hidden health data mining companies) buys and aggregates sensitive “prescription and promotional” records, “electronic medical records,” “claims data,” and “social media” to create “comprehensive,” “longitudinal” health records on “400 million” patients? Or that IMS buys “proprietary data sourced from over 100,000 data suppliers covering over 780,000 data feeds globally”? Again, the answer is “no.”

Given the massive hidden theft, sale, and misuse of the nation’s health information, how can any physician, hospital, or health data holder represent that our personal health data is private, secure, or confidential?

deb

IMS Health Files for IPO – Is It Legal?

On January 2nd, IMS Health Holdings announced it will sell stock on the New York Stock Exchange. IMS joins other major NYSE-listed corporations that derive significant revenue from selling sensitive personal health data, including General Electric, IBM, United Health Group, CVS Caremark, Medco Health Solutions, Express Scripts, and Quest Diagnostics.

  • IMS buys and aggregates sensitive “prescription and promotional” records, “electronic medical records,” “claims data,” “social media” and more to create “comprehensive,” “longitudinal” health records on “400 million” patients.
  • All purchases and subsequent sales of personal health records are hidden from patients.  Patients are not asked for informed consent or given meaningful notice.
  • IMS Health Holdings sells health data to “5,000 clients,” including the US Government.
  • Despite claims that the data sold is “anonymous,” computer science has long established that re-identification is easy (a toy illustration follows this list).
  • See the brief 3-page paper by Narayanan and Shmatikov at: http://www.cs.utexas.edu/~shmat/shmat_cacm10.pdf
  • See Prof. Sweeney’s paper on re-identifying patient data sold by states like WA at: http://thedatamap.org/risks.html
  • “Our solutions, which are designed to provide our clients access to our deep healthcare-specific subject matter expertise, take various forms, including information, tailored analytics, subscription software and expert services.” (from IMS Health Holding’s SEC filing)
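
The re-identification papers cited above rest on a simple observation: a handful of “harmless” fields is often unique to one person. The toy sketch below, using invented records, counts how many people share each (ZIP, birth date, sex) combination; any combination held by exactly one person is effectively a name waiting to be looked up in a voter roll or marketing list.

```python
from collections import Counter

# Invented "anonymous" records; only the quasi-identifiers are shown.
records = [
    ("78701", "1961-07-28", "F"),
    ("78701", "1961-07-28", "F"),
    ("78705", "1983-02-14", "M"),
    ("73301", "1990-11-02", "F"),
]

counts = Counter(records)
unique = [combo for combo, n in counts.items() if n == 1]

print(f"{len(unique)} of {len(counts)} quasi-identifier combinations "
      f"point to exactly one person")
# Anyone holding a voter roll or marketing list with the same fields can attach
# names to those singleton combinations.
```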


Quotes from IMS Health Holdings’ SEC filing:

“We have one of the largest and most comprehensive collections of healthcare information in the world, spanning sales, prescription and promotional data, medical claims, electronic medical records and social media. Our scaled and growing data set, containing over 10 petabytes of unique data, includes over 85% of the world’s prescriptions by sales revenue and approximately 400 million comprehensive, longitudinal, anonymous patient records.”

IMS buys “proprietary data sourced from over 100,000 data suppliers covering over 780,000 data feeds globally.”

How can this business model be legal?  How can companies decide that US citizens’ personal health data is “proprietary data,” a corporate asset, and sell it?  If personal health data ‘belongs’ to anyone, surely it belongs to the individual, not to any corporation that handles, stores, or transmits that information.

Americans’ strongest rights to control personal information are our rights to control personal health information. We have constitutional rights to health information privacy, which are not trumped by the 2002 elimination of the right of consent from HIPAA (see: http://patientprivacyrights.org/truth-hipaa/). HIPAA is the “floor” for privacy rights, not the ceiling. Strong state and federal laws, and medical ethics, require consent before patient data is used or disclosed. Ten state constitutions grant residents a right to privacy, and other states’ constitutions have been interpreted as giving residents a right to privacy (like TX).

Surely the FTC would regard the statements filed with the SEC as evidence of unfair and deceptive trade practices. US patients’ health data is being unfairly and deceptively bought and sold. Can the SEC deny IMS Health the opportunity to offer an IPO, since its business model is predicated on the hidden purchase and sale of Americans’ personal health data?

If we can’t control the use and sale of our most sensitive personal information, data about our minds and bodies, isn’t our right to privacy worthless?

deb

To view the full article published in Modern Healthcare visit:  IMS Health Files for IPO