Revelations by AOL Boss Raise Fears Over Privacy

By Natasha Singer
NYTimes.com, February 10, 2014

Tim Armstrong, the chief executive of AOL, apologized last weekend for publicly revealing sensitive health care details about two employees to explain why the online media giant had decided to cut benefits. He even reinstated the benefits after a backlash.

But patient and workforce experts say the gaffe could have a lasting impact on how comfortable, or discomfited, Americans feel about bosses data-mining their personal lives.

Mr. Armstrong made a seemingly offhand reference to “two AOL-ers that had distressed babies that were born that we paid a million dollars each to make sure those babies were O.K.” The comments, made in a conference call with employees, brought an immediate outcry, raising questions over corporate access to and handling of employees’ personal medical data.

“This example shows how easy it is for employers to find out if employees have a rare medical condition,” said Dr. Deborah C. Peel, founder of Patient Privacy Rights, a nonprofit group in Austin, Tex. She urged regulators to investigate Mr. Armstrong’s disclosure about the babies, saying “he completely outed these two families.”

To view the full article, please visit: Revelations by AOL Boss Raise Fears Over Privacy

Company That Knows What Drugs Everyone Takes Going Public

Nearly every time you fill a prescription, your pharmacy sells details of the transaction to outside companies that compile and analyze the information and resell it to others. The data includes the patient’s age and gender; the name, address and contact details of the prescribing doctor; and details about the prescription itself.

IMS Health, a 60-year-old company little known to the public, is leading the way in gathering this data. It says it has assembled “85% of the world’s prescriptions by sales revenue and approximately 400 million comprehensive, longitudinal, anonymous patient records.”

IMS Health sells data and reports to all of the top 100 global pharmaceutical and biotechnology companies, as well as to consulting firms, advertising agencies, government bodies and financial firms. In a January 2nd filing with the Securities and Exchange Commission announcing an upcoming IPO, IMS said it processes data from more than 45 billion healthcare transactions annually (more than six, on average, for every person on earth) and collects information from more than 780,000 different streams of data worldwide.

Deborah Peel, a Freudian psychoanalyst who founded Patient Privacy Rights in Austin, Texas, has long been concerned about corporate gathering of medical records.

“I’ve spent 35 years or more listening to how people have been harmed because their records went somewhere they didn’t expect,” she says. “It got to employers who either fired them or demoted them or used the information to destroy their reputation.”

“It’s just not right. I saw massive discrimination in the paper age. Exponential isn’t even a big enough word for how far and how much the data is going to be used in the information age,” she continued. “If personal health data ‘belongs’ to anyone, surely it belongs to the individual, not to any corporation that handles, stores, or transmits that information.”

To view the full article, please visit: Company That Knows What Drugs Everyone Takes Going Public

Data Mining to Recruit Sick People

Companies Use Information From Data Brokers, Pharmacies, Social Networks

Some health-care companies are pulling back the curtain on medical privacy without ever accessing personal medical records, by probing readily available information from data brokers, pharmacies and social networks that offer indirect clues to an individual’s health.

Companies specializing in patient recruitment for clinical trials use hundreds of data points—from age and race to shopping habits—to identify the sick and target them with telemarketing calls and direct-mail pitches to participate in research.
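To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of proxy scoring such a recruiter could run on purchased consumer data. All of the field names, weights, and the cutoff are invented for illustration; the point is that no medical record is ever touched, yet the output is effectively a list of people presumed to be sick.

```python
# Hypothetical sketch only: how a trial recruiter could score consumers for
# outreach using commercially available, non-HIPAA data points. All field
# names, weights, and the threshold below are invented for illustration.

from dataclasses import dataclass

@dataclass
class ConsumerRecord:
    age: int
    buys_glucose_test_strips: bool      # e.g., retail loyalty-card purchases
    subscribes_diabetes_magazine: bool  # e.g., list-broker subscription data
    searched_condition_keywords: bool   # e.g., ad-network browsing inference
    zip_code: str

def diabetes_trial_score(r: ConsumerRecord) -> float:
    """Crude proxy score: higher means more likely to be targeted."""
    score = 0.0
    if r.age >= 45:
        score += 1.0
    if r.buys_glucose_test_strips:
        score += 2.0
    if r.subscribes_diabetes_magazine:
        score += 1.5
    if r.searched_condition_keywords:
        score += 1.0
    return score

prospects = [
    ConsumerRecord(58, True, True, False, "78701"),
    ConsumerRecord(29, False, False, False, "78702"),
]

# Anyone over the threshold gets the telemarketing call or direct-mail pitch.
targets = [p for p in prospects if diabetes_trial_score(p) >= 3.0]
print(targets)  # only the first record qualifies here
```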

“I think patients would be shocked to find out how little privacy protection they have outside of traditional health care,” says Nicolas P. Terry, professor and co-director at the Center for Law and Health at Indiana University’s law school. He adds, “Big Data essentially can operate in a HIPAA-free zone.”

FTC Commissioner Julie Brill says she is worried that nonprotected consumer data can be used to deny employment or to inadvertently reveal illnesses that people want kept secret. “As Big Data algorithms become more accurate and powerful, consumers need to know a lot more about the ways in which their data is used,” Ms. Brill says.

To view the full article, please visit: Data Mining to Recruit Sick People (article published December 17, 2013)

Privacy advocates fear massive fed health database

Please see the article “Privacy advocates fear massive fed health database” in Computerworld, by Jaikumar Vijayan.

Many state and federal agencies either release, or will soon release, massive free or low-cost “public use data files” without testing whether our sensitive personal health information can be re-identified and without obtaining our consent to use it.

Describing databases as “anonymized” or “de-identified” lulls the public into thinking that their health records are safe and cannot be re-identified. But that isn’t true. Every method intended to prevent data from being re-identified should first be tested and proven.

Patient Privacy Rights recommends that any health data set be subject to “adversarial challenge criteria” to assess the actual threats and risks of re-identification before release. See “Notes About Anonymizing Data For Public Release” by Andrew Blumberg, PhD, at: http://patientprivacyrights.org/wp-content/uploads/2010/10/ABlumberg-anonymization-memo.pdf
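As one concrete illustration of what such a challenge might start with, the sketch below (with assumed field names and a toy data set) measures how many records are unique, or nearly unique, on common quasi-identifiers such as ZIP code, birth year and sex. A unique combination can often be matched to a named person using voter rolls or other public records; the criteria in the Blumberg memo go well beyond this single test.

```python
# Simplified, assumed example of one pre-release check: what share of records
# is unique (or nearly unique) on common quasi-identifiers? Real adversarial
# challenge criteria involve far more than this single test.

from collections import Counter

def reidentification_risk(records, quasi_ids=("zip", "birth_year", "sex"), k=5):
    """Share of records whose quasi-identifier combination appears fewer
    than k times in the data set (i.e., fails a basic k-anonymity test)."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    at_risk = sum(1 for r in records
                  if combos[tuple(r[q] for q in quasi_ids)] < k)
    return at_risk / len(records)

sample = [
    {"zip": "78701", "birth_year": 1954, "sex": "F", "diagnosis": "..."},
    {"zip": "78701", "birth_year": 1954, "sex": "F", "diagnosis": "..."},
    {"zip": "78702", "birth_year": 1987, "sex": "M", "diagnosis": "..."},
]

# With k=2, the third record's combination is unique, so a third of the
# records could plausibly be linked back to named individuals.
print(f"{reidentification_risk(sample, k=2):.0%} of records at risk")  # 33%
```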

After the challenge criteria are used to test the data, patients should be informed of the risk of re-identification and asked for consent to include their data.

Even the NIH had to close down a database of genetic information that was supposedly de-identified after the 141st researcher to download the database reported being able to re-identify actual patients.

It’s extremely hard to create health data sets that cannot be re-identified. Given that fact, patient consent should be required for the use of health data and patients should be informed of the risks of re-identification BEFORE their data is included in public use data sets.

Without basic protections, i.e., requiring informed consent and adversarial challenges, our health data will be used to create valuable, detailed profiles of each of us. Our own health records will be sold and used to discriminate against us in employment, credit, and other opportunities in life, not for research to improve our health and our treatment.

Employers after DNA: GINA does not protect you the way you think.

See this CBS News article: Want A Job In Akron? Hand Over Your DNA

The idea that GINA protects genetic tests from being held or used by employers and insurers is wrong. Genetic tests ordered by your doctor at any other time, when you are NOT seeking a job or insurance, can be collected and used by your employer and insurer to make decisions about you.

Lobbyists for the insurance industry and employers got this massive loophole into the bill, eliminating the intended consumer protections. Instead, GINA should have forbidden employers and insurers from ever collecting or accessing genetic tests.

This is one of the key reasons we need Congress to restore OUR rights to control our personal health information, so WE can make sure employers and insurers do not get our genetic records. Genetic information is so sensitive it should ONLY be seen by health professionals directly involved in our treatment, or if we choose to participate in research and share it.