What does autonomy mean?

Written in response to the following article:


When it comes to clinical-decision software, what does autonomy mean?

 

The current rage for building, selling, and using “bedside analytics” and Big Data technologies is all about financial gain for investors and corporations; it is not about the best interests of the public.
 
Whatever advances the Kaiser technology offers are all proprietary and hidden. Nothing is tested/vetted scientifically by other academic experts. I’ve written before about Mayo’s Bedside Analytics—it’s the exact same thing—all secret.
 
The privatization of the science of Medicine and the privatization of research in Medicine is an incredibly bad development for the people of the US. This is not the case in Europe or other Western nations. Only in the US does money trump science and the greater good.
 
In the past, US Medicine advanced via science: research data and results/conclusions were always openly shared, vetted, and tested by other researchers. Advances in knowledge were always shared for the greater good.
 
The practices of Medicine and Nursing in the US used to be “professions” with ethics that required physicians and nurses to put patients’ needs and interests first, ahead of their own personal interests. The underlying idea was that advances in Medicine belong in the public domain. Suppose Salk and Sabin had privatized their research and the resulting injected and oral polio vaccines? If profit had always been the motive of Medicine and Nursing, would these professions ever have gained the public’s trust?
 
Now National Nurses United opposes the use of these technologies. BRAVO!
 
Where is the comparable response from any medical professional organization? So far, there is none.
 
deb

 

We still can’t get copies of our health records

It’s been 13 years since the HIPAA Privacy Rule required hospitals and physicians to give patients copies of their electronic health records. We still can’t get them.
 
The healthcare industry is stonewalling: most people can’t get electronic copies of their health data at all, even after jumping through labor-intensive, paper-based processes. How ironic: to get copies of personal health data, we have to sign primitive, blanket paper consent forms. Unlike every other business in the Digital Age, the healthcare and health IT industries have no interest in building direct online relationships with their customers, i.e., patients.
 
Why are we being stonewalled? There are several answers:
  • Personal health data is the most valuable personal information in the Digital Age. The healthcare and health technology industries don’t want to give us our own data; they think it belongs to them.
  • The change in HIPAA legalized the US health data broker industry.
      ◦ Health data holders, users, and analyzers now control what, when, and to whom our sensitive personal health data is disclosed, sold, or traded.
      ◦ Virtually all companies that hold, process, or analyze our personal health information believe it belongs to them, including hospitals, electronic health records systems, pharmacies, health technology companies, mobile health and fitness apps, labs, and health insurers.
  • Most people have no idea that the US has the world’s largest hidden health data broker industry. This industry is illegal in Europe.
  • Few physicians or pharmacists know that patient data is being sold; ask, and they will tell you HIPAA protects the privacy of your data—but it doesn’t.
 
Deb

 

Tech Groups Press Again On ECPA Reform : Support Email Privacy

Patients need and want to use secure, encrypted email to communicate with health professionals. Why should the government be able to look at our email without a warrant?

The 1986 Electronic Communications Privacy Act (ECPA) must be updated to stop the government from reading our email without approval from a judge.

From the letter to President Obama signed by 81 groups, including Patient Privacy Rights, that asked him to champion fixing the ECPA:

  • “We write today to urge you to support reform of the Electronic Communications Privacy Act (ECPA) to guarantee that every American has full constitutional and statutory protections for the emails, photos, text messages, and other documents that they send and share online.”

“A warrant based on the probable cause standard is required for searches of U.S. mail, searches of a home, or even electronic communications that are not stored with companies like Google or Yahoo.” The same protections are just as important for email between doctors and patients!

Support for “email privacy” is bipartisan; see: #ECPAReform http://bit.ly/1rAW7MY

Join us in telling the President to pursue #ECPAReform www.NotWithoutaWarrant.com http://bit.ly/1rAW7MY

URL for POLITICO article:  http://www.politico.com/morningtech/0414/morningtech13755.html

POLITICO Morning Tech:  FIRST LOOK: TECH GROUPS PRESS AGAIN ON ECPA REFORM — A gaggle of tech advocacy and industry groups are again imploring the White House to put their weight behind email privacy reform, and this time making clear that any loopholes for civil agencies would be a nonstarter. The groups, led chiefly by the Digital 4th and Digital Due Process coalitions, have been ramping up their ECPA reform push in the hopes of convincing Washington to tackle an issue that they see as low-hanging fruit. In a letter to President Obama today, they want the White House to know that they won’t support any warrant requirement carve-out for federal agencies like the Securities and Exchange Commission. “Seemingly, the only major impediment to passage is an objection by administrative agencies like the Securities and Exchange Commission, which would like to gut the legislation as a way to expand their investigative authorities,” write the groups, which include TechNet, Reddit, the Electronic Frontier Foundation and the ACLU. “Such an agency carve out would be a major blow to reform efforts, allowing increased government access to our communications during the many civil investigations conducted by federal and state agencies.” Full letter here: http://bit.ly/1kfKrfX

 

deb

 

NHS England patient data ‘uploaded to Google servers’, full disclosure demanded

The UK government has been debating illegal disclosures of patient health data: “The issue of which organisations have acquired medical records has been at the centre of political debate in the past few weeks, following reports that actuaries, pharmaceutical firms, government departments and private health providers had either attempted or obtained patient data.”

The article closes with quotes from Phil Booth of medConfidential:

  • “Every day another instance of whole population level data being sold emerges which had been previously denied”.
  • “There is no way for the public to tell that this data has left the HSCIC. The government and NHS England must now come completely clean. Anything less than full disclosure would be a complete betrayal of trust.”

Far worse privacy violations are the norm in the US, yet our government won’t acknowledge that US health IT systems enable hidden sales and sharing of patients’ health data.  US patients are prevented from controlling who sees their health records and can’t obtain real-time lists of who has seen and used personal health data.

Learn how the data broker industry violates Americans’ strong rights to control the use of personal health information in IMS Health Holdings’ SEC filing for an IPO:

  • IMS buys and aggregates sensitive “prescription and promotional” records, “electronic medical records,” “claims data,” “social media” and more to create “comprehensive,” “longitudinal” health records on “400 million” patients.
  • All purchases and subsequent sales of personal health records are hidden from patients.  Patients are not asked for informed consent or given meaningful notice.
  • IMS Health Holdings sells health data to “5,000 clients,” including the US Government.
  • IMS buys “proprietary data sourced from over 100,000 data suppliers covering over 780,000 data feeds globally.”

Data brokers claim they don’t violate our rights to health information privacy because our data are “de-identified” or “anonymized”—but computer scientists have proven it’s easy to re-identify aggregated, longitudinal data sets.
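The attack computer scientists demonstrated is a “linkage attack”: a supposedly anonymous record still carries quasi-identifiers (ZIP code, birth date, sex) that can be joined against a public dataset such as a voter roll. Here is a minimal toy sketch of the idea in Python; all names and records are invented for illustration, not drawn from any real dataset:

```python
# Toy "linkage attack": join a "de-identified" medical dataset to a
# public dataset on shared quasi-identifiers, re-attaching names.
deidentified_records = [
    {"zip": "78701", "birthdate": "1965-07-04", "sex": "F", "diagnosis": "diabetes"},
    {"zip": "78745", "birthdate": "1980-01-15", "sex": "M", "diagnosis": "depression"},
]

# A public dataset (e.g., a voter roll) with the same quasi-identifiers.
public_records = [
    {"name": "Jane Doe", "zip": "78701", "birthdate": "1965-07-04", "sex": "F"},
    {"name": "John Roe", "zip": "78745", "birthdate": "1980-01-15", "sex": "M"},
]

QUASI_IDS = ("zip", "birthdate", "sex")

def reidentify(deid, public):
    """Link any records that agree on all quasi-identifiers."""
    matches = []
    for d in deid:
        key = tuple(d[q] for q in QUASI_IDS)
        for p in public:
            if tuple(p[q] for q in QUASI_IDS) == key:
                matches.append({"name": p["name"], "diagnosis": d["diagnosis"]})
    return matches

# Each match re-attaches a name to a supposedly anonymous diagnosis.
print(reidentify(deidentified_records, public_records))
```

Real attacks work the same way at scale: the more longitudinal detail a profile contains, the fewer people it could possibly describe.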

deb

This blog was written in response to the following article: NHS England patient data ‘uploaded to Google servers’, Tory MP says

NHS legally barred from selling patient data for commercial use. When will the US wake up?

When will the US bar the sale of patient data for commercial use?

1st: The public has to wake up.

2nd: The LIE that patient data is sold “for research” must be exposed.

US law permits any corporation to buy, sell, or share patient data for commerce (i.e., Big Data analytics and proprietary products) without patient consent or knowledge. This is a fact.

deb

This blog was written in response to the following article: NHS legally barred from selling patient data for commercial use

What You Need to Know About Patient Matching and Your Privacy and What You Can Do About It

Today, ONC released a report on patient matching practices, and to the casual reader it will look like a byzantine subject. It’s not.

You should care about patient matching, and you will.

It impacts your ability to coordinate care, purchase life and disability insurance, and maybe even your job. Through ID theft, it also impacts your safety and security. Patient matching’s most significant impact, however, could be on your pocketbook, as it’s being used to fix prices and reduce competition in a high-deductible insurance system that makes families subject to up to $12,700 of out-of-pocket expenses every year.

Patient matching is the healthcare cousin of NSA surveillance.

Health IT’s watershed is when people finally realize that hospital privacy and security practices are unfair and we begin to demand consent, data minimization and transparency for our most intimate information. The practices suggested by Patient Privacy Rights are relatively simple and obvious and will be discussed toward the end of this article.

Health IT tries to be different from other IT sectors. There are many reasons for this, few of them good. Health IT practices are dictated by HIPAA, where the rest of IT is governed by either the FTC or the Fair Credit Reporting Act. Healthcare is mostly paid for by third-party insurance, so the risks of fraud are different than in traditional markets.

Healthcare is delivered by strictly licensed professionals regulated differently than the institutions that purchase the Health IT. These are the major reasons for healthcare IT exceptionalism but they are not a good excuse for bad privacy and security practices, so this is about to change.

Health IT privacy and security are in tatters, and nowhere is it more evident than the “patient matching” discussion. Although HIPAA has some significant security features, it also eliminated a patient’s right to consent and Fair Information Practice.

Patient matching by all sorts of health information aggregators and health information exchanges is involuntary and hidden from the patient as much as NSA surveillance is.

Patients have no idea how many databases are tracking our every healthcare action. We have no equivalent to the Fair Credit Reporting Act to cover these database operators. The databases are both public and private. The public ones are called Health Information Exchanges, All Payer Claims Databases, Prescription Drug Monitoring Programs, Mental Health Registries, Medicaid, and more.

The private ones are called “analytics” and sell billions of dollars’ worth of our aggregated data to hospitals eager to improve their margins, if not their mission.

The ONC report overlooks the obvious issue of FAIRNESS to the patient. The core of Fair Information Practice is Consent, Minimization, and Transparency. The current report ignores all of these issues:

- Consent is not sought. By definition, patient matching is required for information sharing, so patient matching without patient consent leads to sharing of PHI without patient consent. The consent form used to authorize patient matching must list the actual parameters that will be used for the match. Today’s generic Notice of Privacy Practices is as inadequate as signing a blank check.

- Data is not minimized. Citizen matching outside of the health sector is usually based on a unique and well understood identifier such as a phone number, email, or SSN. To the extent that the report does not allow patients to specify their own matching criterion, a lot of extra private data is being shared for patient matching purposes. This violates data minimization.

- Transparency is absent. The patient is not notified when they are matched. This violates the most basic principles of error management and security. In banking or online services, it is routine to get a simple email or a call when a security-sensitive transaction is made.

This must be required of all patient matching in healthcare. In addition, patients are not given access to the matching database. This elementary degree of transparency for credit bureaus that match citizens is law under the Fair Credit Reporting Act and should be at least as strict in health care.
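The data-minimization point can be made concrete with a small sketch. The contrast below is illustrative only (the field names and records are hypothetical, not any real exchange’s API): probabilistic demographic matching must circulate many quasi-identifiers and still guesses, while matching on a single patient-specified identifier, such as a Direct address, discloses exactly one field:

```python
# Hypothetical contrast between two ways of deciding that two records
# describe the same patient.

# Probabilistic demographic matching: many quasi-identifiers are
# shared, and the match is a guess based on how many fields agree.
def match_demographic(a, b):
    fields = ("name", "birthdate", "sex", "zip", "phone")
    shared = sum(a.get(f) == b.get(f) for f in fields)
    return shared >= 4  # heuristic threshold; can err in both directions

# Minimized matching: exactly one field, chosen by the patient,
# is compared; nothing else needs to be disclosed.
def match_minimized(a, b):
    addr_a, addr_b = a.get("direct_address"), b.get("direct_address")
    return addr_a is not None and addr_a == addr_b

record_hospital = {"name": "Jane Doe", "birthdate": "1965-07-04", "sex": "F",
                   "zip": "78701", "phone": "512-555-0100",
                   "direct_address": "jane@direct.example.org"}
record_lab = {"name": "J. Doe", "birthdate": "1965-07-04", "sex": "F",
              "zip": "78701", "phone": "512-555-0100",
              "direct_address": "jane@direct.example.org"}

# Both approaches link this pair, but the minimized one disclosed
# one identifier instead of five.
print(match_demographic(record_hospital, record_lab))  # True (4 of 5 fields agree)
print(match_minimized(record_hospital, record_lab))    # True
```

The design choice is the point: when the patient supplies the identifier, the match can be exact, auditable, and disclosed on the consent form, rather than inferred from a pile of demographics behind the patient’s back.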

These elementary features of any EHR and any exchange are the watershed defining patient-centered health IT. If a sense of privacy and trust doesn’t push our service providers to treat patients as first-class users, then the global need for improved cybersecurity will have to drive the shift. Healthcare is critical infrastructure just as much as food and energy.

But what can you, as a patient, do to hasten your emancipation? I would start with this simple checklist:

Opt-out of sharing your health records unless the system offers:

  • Direct secure messaging with patients
  • Plain email or text notification of records matching
  • Patient-specified Direct email as match criterion
  • Your specific matching identifiers displayed on all consent forms
  • Online patient access to matchers and other aggregator databases

None of these five requirements are too hard. Google, Apple and your bank have done all of these things for years. The time has come for healthcare to follow suit.

Adrian Gropper, MD is Chief Technical Officer of Patient Privacy Rights and participates in Blue Button+, Direct secure messaging governance efforts and the evolution of patient-directed health information exchange.

Check out the Latest from Dr. Gropper, courtesy of The Healthcare Blog.

Did Tim Armstrong’s ‘Distressed Babies’ Comment Violate HIPAA Privacy Laws?

US citizens have a fundamental Constitutional right to health information privacy—but can’t easily sue. Only federal employees can sue under the Privacy Act of 1974, as veterans did when a laptop with millions of health records was stolen. Even with strong state health privacy laws and state constitutional rights to privacy in place, it’s very hard to sue because most courts demand proof of monetary harm. This new digital disaster, the exposure and sale of sensitive personal health data, can’t be stopped without stronger, clearer federal laws, or unless US citizens boycott the corporations that violate their rights to health privacy.

-Deb

This blog written in response to the following article:

Did Tim Armstrong’s ‘Distressed Babies’ Comment Violate HIPAA Privacy Laws?
By Abby Ohlheiser
The Wire, February 10, 2014

Guest Blog – The AOL Babies: Our Healthcare Crisis in a Nut

Check out the latest from Nic Terry, courtesy of HealthLawProf Blog.

Where does one start with AOL CEO Armstrong’s ridiculous and unfeeling justifications for changes in his company’s 401(k) plan? Cable TV and Twitter came out of the blocks fast with the obvious critiques. And the outrage only increased after novelist Deanna Fei took to Slate to identify her daughter as one of the subjects of Armstrong’s implied criticism. Armstrong has now apologized and reversed his earlier decision.

As the corporate spin doctors contain the damage, Armstrong’s statements likely will recede from memory, although I am still hoping The Onion will memorialize Armstrong’s entry into the healthcare debate (suggested headline, “CEO Discovers Nation’s Healthcare Crisis Caused by 25 Ounce Baby”). But supposing (just supposing) your health law students ask about the story in class this week. What sort of journey can you take them on?

First (but only if you are feeling particularly mean), you could start with HIPAA privacy. After all, intuitively it seemed strange to hear an employer publicly describing the serious health problems of employees’ family members. With luck your students will volunteer that the HIPAA Privacy Rule does not apply to employers (not “covered entities”). True, but AOL provided employees and their families with a health plan. Assume this was an employer-sponsored plan of some scale. It remains the case that the plan and not the employer is subject to the Privacy Rule, although following the Omnibus rule, the plan and its business associates are going to face increased regulation (such as breach notification, new privacy notices, etc.). The employer’s responsibilities are to be found at 45 CFR 164.504 and primarily 164.504(f) (and here we descend deep into the HIPAA weeds). The employer must ensure that the plan sets out the plan members’ privacy rights vis-à-vis the employer. For plans like these the employer can be passed somewhat de-identified summary information (though for very limited purposes that don’t seem to include TV appearances). However, if the employer essentially administers the plan, then things get more complicated. Firewalls are required between different groups of employees, and employer use of PHI is severely limited. By the way, and in fairness to Mr. Armstrong, there are many things we don’t know about the AOL health plan, the source of his information about the “distressed babies,” and whether any PHI had been de-identified. Yet, at the very least AOL may have opened itself up to the OCR asking similar questions and starting an investigation into how AOL treats enrollee information.

Second, this storm about the babies’ health insurance should provide a good basis for discussion of the various types of health insurance and their differential treatment by the Affordable Care Act. A large company likely will offer either a fully-insured or self-insured plan to its employees. If the latter, would your students have recommended reinsurance against claim “spikes” with a stop-loss policy? ACA should have relatively little impact on such plans or their cost except where the plans fall beneath the essential benefits floor. Contrast such plans with those traditionally offered on the individual market that are now being replaced with the lower cost (subject again to extra costs associated with essential benefits) health exchange-offered plans.

Third, this entire episode raises the question of health care costs and, specifically, the pricing of health care. On first hearing, a million-dollar price tag seems extraordinary. Yet as Ms. Fei noted in her Slate article, her daughter spent three months in a neonatal ICU and endured innumerable procedures and tests resulting in “a 3-inch thick folder of hospital bills that range from a few dollars and cents to the high six figures.” Now, the ACA may be criticized for not doing enough to cut costs (how about a quick pop quiz on what it does try to do?), but is there any truth to the argument that it raises health care costs? Recent investigative work by Steve Brill and fine scholarship by Erin Fuse Brown have highlighted both high prices and high differential pricing in health care. So why would a corporate executive (either directly or indirectly) blame high prices on the ACA? Are, for example, technology markets so different that the reasons for health care costs are underappreciated? And by extension, instead of fighting the ACA, why are corporate CEOs not urging a second round of legislation aimed specifically at reducing the cost of healthcare for all? After all, it is highly unlikely FFS pricing would be tolerated in their non-health domains. Or does such a group prefer the status quo and what Beatrix Hoffman critically terms rationing by price?

New CLIA rule talks the talk, but it doesn’t walk the walk

Deborah Peel, MD, Founder and Chair of Patient Privacy Rights

The federal government released an update to the CLIA rule this week that will require all labs to send test results directly to patients. But the regulations fail to achieve the stated intent to help patients. The rule allows labs to delay patient access to test results up to 30 days, and the process for directly obtaining personal test results from labs is not automated.

The new rule also fails to help patients in significant ways:

  • Real-time, online test results are not required. The federal government should have required all labs to use technology that benefits patients by enabling easy, automatic access to test results via the Internet in real-time. Unless we can obtain real-time access to test results, we can’t get a timely second opinion or verify the appropriate tests were ordered at the right time for our symptoms and diseases.
  • Labs are allowed to charge fees for providing test results to patients.  If labs can charge fees, they will not automate the process for patients to obtain results. Labs that automate patient access to test results online would incur a one-time cost.  After labs automate the process, human ‘work’ or time is no longer needed to provide patients their test results, so the labs would have no ongoing costs to recoup from patients.
  • Labs should be banned from selling, sharing, or disclosing patient test results without meaningful informed consent to anyone, except the physician who ordered the tests. This unfair and deceptive trade practice should be stopped. No patient expects labs to sell or share their test results with any other person or company except the physician who ordered the test(s).

This rule raises a question: why do so many federal rules for improving the healthcare system fail to require technologies that benefit patients?

Technology could provide enormous benefits to patients, but the US government caters to the healthcare and technology industries, instead of protecting patients.

Current US health IT systems actually facilitate the exploitation of patients’ records via technology. When HHS eliminated patient control over personal health data from HIPAA in 2002, it created a massive hidden US data broker industry that sells, shares, aggregates, and discloses longitudinal patient profiles (for an example, see IMS Health’s SEC filing, with details about selling 400M longitudinal patient profiles to 5K clients, including the U.S. government).

Meanwhile, even the most mundane, annoying, repetitive tasks patients must perform today—like filling out new paper forms with personal information every time we visit a doctor—are not automated for our convenience or to improve data quality and accuracy.

Shouldn’t IT improve patients’ experiences, treatment, and restore personal control over sensitive health information?

deb

You can also view a copy of this blog post here

Guest Article: Can You Ever Opt Out from Data Brokers?

Check out the latest from Debra Diener, courtesy of Privacy Made Simple.

Consumers may wonder how it is that they get ads, emails and other information from companies with whom they have had no interaction on or off-line.  Maybe they’re particularly confused if they’ve set their privacy settings to block cookies and other tracking devices.

The reality is that data brokers gather, compile, and then sell lists of personal information to companies. So what can consumers do if they want to try to protect their information from being compiled and sold by data brokers? The answer is “it’s not easy,” especially given the number of data brokers and the range of information they collect.

Julia Angwin’s newly published book, Dragnet Nation, focuses, in part, on her efforts to identify data brokers and then get the information those brokers have about her. I plan on reading her book, as I heard her discuss it recently and have just read her January 30th article, “Privacy Tools: Opting Out from Data Brokers,” posted on ProPublica (www.propublica.org).

Her ProPublica article summarizes the steps required by some of the data brokers in order for her to opt out of information collection. As Ms. Angwin writes, there’s no law requiring data brokers to offer consumers that option. She very helpfully attaches two spreadsheets to her article with the names of companies tracking information, along with links to their privacy pages and, for those data brokers offering an opt-out, the instructions for doing so. As she writes, many of the data brokers require consumers who want to opt out to provide personal information and identification (e.g., a driver’s license).

Ms. Angwin’s spreadsheets of 212 data brokers provide consumers with a very useful resource. She is also very candid in describing the difficulties in finding her own information and what she calls “some minor successes” in finding data brokers who had her information and opting out.