Privacy could ‘crash’ big data if not done right

April 15, 2014 | By Ashley Gold | FierceHealthIT

Privacy has the potential to crash big data before there’s a chance to get it right, and finding the right balance is key to future success, experts argued at a Princeton University event earlier this month.

The event, titled “Big Data and Health: Implications for New Jersey’s Health Care System,” featured four panels exploring health, privacy, cost and transparency as they relate to how big data can improve care and patient outcomes, according to an article on the university’s website.

“Privacy will crash big data if we don’t get it right,” Joel Reidenberg, visiting professor of computer science at Princeton and a professor at Fordham University’s School of Law, said at the event.

To view the full article, please visit Privacy could ‘crash’ big data if not done right

 

Advances in health IT must be viewed as a whole

by Andy Oram | @praxagora | April 7, 2014

Reformers in health care claim gigantic disruption on the horizon: devices that track our movements, new treatments through massive data crunching, fluid electronic records that reflect the patient’s status wherever she goes, and even the end of the doctor’s role. But predictions in the area of health IT are singularly detached from the realities of the technical environments that are supposed to make them happen.

To help technologists, clinicians, and the rest of us judge the state of health IT, I’ve released a report titled “The Information Technology Fix for Health: Barriers and Pathways to the Use of Information Technology for Better Health Care.” It offers an overview of each area of innovation to see what’s really happening and what we need to make it progress further and faster.

To view the full article, please visit: Advances in health IT must be viewed as a whole

3 Reasons Your Medical Records Are at Risk

When hospitals find themselves in the middle of a breach, they usually prioritize improving their security to prevent further incidents.

In addition to defending themselves against data breaches, health systems also need to find the right balance to adequately protect their patients’ privacy.

Since medical information is stored digitally, patients may not be fully aware of how crucial it is to protect their data from being seen by unauthorized persons. Some privacy breaches may be avoidable, and learning from these mistakes is essential if health systems are to maintain the security of sensitive patient information. Here are three reasons why patient security may be lacking at health organizations.

Privacy Is on the Back Burner

When health IT systems are built, ensuring patient privacy is usually not at the forefront of designers’ and engineers’ minds. These IT experts usually put system functions ahead of privacy, which can result in poor privacy protection down the road. Some developers may also leave out privacy features altogether, which puts patient information at risk of being compromised.

Human Error

According to a recent report in The Republic, psychiatric facilities in Texas have suffered a string of data breaches, the majority of which were caused by human error.

Deborah Peel, the Austin founder of watchdog group Patient Privacy Rights, said repeated data breach incidents could lead patients to question whether their information is secure, which could cultivate distrust among patients. “Our patients deserve privacy and expect that their information is kept confidential,” said Christine Mann, spokeswoman for the Texas Department of State Health Services.

To view the full article please visit: 3 Reasons Your Medical Records Are at Risk

Guest Article: The Causes of Digital Patient Privacy Loss in EHRs and Other Health IT Systems

Check out the latest from Shahid Shah, courtesy of The Healthcare IT Guy.

This past Friday I was invited by the Patient Privacy Rights (PPR) Foundation to lead a discussion about privacy and EHRs. The discussion, entitled “Fact vs. Fiction: Best Privacy Practices for EHRs in the Cloud,” addressed patient privacy concerns and potential solutions for doctors working with EHRs.

While we are all somewhat disturbed by the slow erosion of privacy in all aspects of our digital lives, the rapid loss of patient privacy around health data is especially unnerving because healthcare is so near and dear to us all. To make sure we provided some actionable intelligence during the PPR discussion, I started the talk off by giving some of the reasons why we’re losing patient privacy, in the hope that it might prompt innovators to think about ways of slowing down the inevitable losses.

Here are some of the causes I mentioned on Friday, not in any particular order:

  • Most patients, even technically astute ones, don’t really understand the concept of digital privacy. The digital “cyber world” is not easy to picture, so patients believe their data and privacy are protected when they may not be. I usually explain patient privacy in the digital world to non-techies using the analogy of curtains, doors, and windows. Today’s digital health IT world is like a hospital in which every patient room is one large shared space with no curtains, no walls, and no doors (not even for bathrooms or showers). In this imaginary world, every private conversation can be overheard and every procedure is performed in front of others, without the patient’s consent and over the patient’s objections. If patients can picture that scenario, they will have a good idea of how digital privacy is handled today: a big shared room where everyone sees and hears everything, regardless of patients’ objections.
  • It’s faster and easier to create non-privacy-aware IT solutions than privacy-aware ones. Having built dozens of HIPAA-compliant and highly secure enterprise health IT systems over several decades, my anecdotal experience is that when it comes to features and functions vs. privacy, features win. Product designers, architects, and engineers talk the talk, but given the difficulties of creating viable systems in a coordinated, integrated digital ecosystem, it’s really hard to walk the privacy walk. Because digital privacy is so hard to describe even in simple single-enterprise systems, the difficulty of describing and defining it across multiple integrated systems is often the reason for poor privacy features in modern systems.
  • It’s less expensive to create non-privacy-aware IT solutions. Because designing privacy into the software from the beginning is hard and requires expensive security resources to do so, we often see developers wait until the end of the process to consider privacy. Privacy can no more be added on top of an existing system than security can — either it’s built into the functionality or it’s just going to be missing. Because it’s cheaper to leave it out, it’s often left out.
  • The government is incentivizing and certifying functionality over privacy and security. All the meaningful use certification and testing steps focus too much on prescribed functionality and not enough on data-centric privacy capabilities such as notifications, disclosure tracking, and compartmentalization (a sketch of what those capabilities can look like in code follows this list). If privacy were important in EHRs, the NIST test plans would cover it. Privacy is difficult to define and even more difficult to implement, so the testing process doesn’t focus on it at this time.
  • Business models that favor privacy loss tend to be more profitable. Data aggregation and homogenization, resale, secondary use, and related business models tend to be quite profitable. The only way they will remain profitable is to have easy and unfettered (low-friction) ways of sharing and aggregating data. Because enhanced privacy through opt-in processes, disclosures, and notifications would reduce data sharing and potentially reduce revenues and profit, we can expect privacy loss to continue with the inevitable rise of EHRs.
  • Patients don’t really demand privacy from their providers or IT solutions in the same way they demand other things. We like to think that all patients demand digital privacy for their data. However, it’s rare for patients to choose physicians, health systems, or other care providers based on their privacy views. Even when privacy violations are found and punished, it’s uncommon for patients to switch to other providers.
  • Regulations like HIPAA have made it easy for privacy loss to occur. HIPAA has probably done more to harm privacy over the past decade than any other government regulation. More on this in a later post.
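
To make the “built in, not bolted on” point concrete, here is a minimal sketch in Python of a privacy-aware data-access layer in which consent checks and disclosure logging sit inside the only path to patient data. All class, field, and function names here are hypothetical; the sketch is not drawn from any real EHR product or API.

```python
# Sketch: privacy designed in rather than added later. Every read of a patient
# record goes through one access function that checks consent and writes a
# disclosure-log entry. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PatientRecord:
    patient_id: str
    data: dict
    # Consent lives with the record, not in an afterthought table.
    consented_purposes: set = field(default_factory=set)


@dataclass
class Disclosure:
    patient_id: str
    requester: str
    purpose: str
    timestamp: str


class PrivacyAwareStore:
    def __init__(self):
        self._records = {}
        self.disclosure_log = []  # real-time accounting of disclosures

    def add(self, record: PatientRecord) -> None:
        self._records[record.patient_id] = record

    def read(self, patient_id: str, requester: str, purpose: str) -> dict:
        record = self._records[patient_id]
        # The consent check happens on every access path; there is no "raw" accessor.
        if purpose not in record.consented_purposes:
            raise PermissionError(f"No consent recorded for purpose '{purpose}'")
        self.disclosure_log.append(
            Disclosure(patient_id, requester, purpose,
                       datetime.now(timezone.utc).isoformat()))
        return dict(record.data)  # hand back a copy, not the stored object


if __name__ == "__main__":
    store = PrivacyAwareStore()
    store.add(PatientRecord("pt-001", {"dx": "J45.909"}, {"treatment"}))
    print(store.read("pt-001", requester="dr-smith", purpose="treatment"))
    print(store.disclosure_log)
```

The design choice that matters is that there is no accessor to route around: because every read passes through one function, disclosure accounting and consent enforcement come for free instead of being features someone must remember to add at the end.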

The only way to improve privacy across the digital spectrum is to realize that health providers need to conduct business in a tricky, intermediary-driven health system with sometimes conflicting business goals, like reducing medical errors or lowering costs (which can only come with more data sharing, not less). Digital patient privacy is important, but there are many valid reasons why privacy is either hard or impossible to achieve in today’s environment. Unless we intelligently and honestly understand why we lose patient privacy, we can’t really create novel and unique solutions to help curb the loss.

What do you think? What other causes of digital patient privacy loss would you add to my list above?

Courtesy of The Healthcare IT Guy.

3 reasons for the demise of patient privacy

By Dan Bowman from FierceHealthIT

Several factors have contributed to the demise of patient privacy in recent years, according to software analyst and healthcare blogger Shahid Shah (a.k.a. The Healthcare IT Guy).

For example, Shah said at a recent discussion on best privacy practices for electronic health records in the cloud, hosted by the Patient Privacy Rights Foundation, that patients tend not to “demand” privacy as the cost of doing business with providers.

“It’s rare for patients to choose physicians, health systems or other care providers based on their privacy views,” Shah said in a blog post summarizing thoughts he shared at the event. “Even when privacy violations are found and punished, it’s uncommon for patients to switch to other providers.”

To view the full article visit 3 reasons for the demise of patient privacy

 

Testimony of Deborah C. Peel, MD at the ONC’s Patient Matching Stakeholder Meeting

WASHINGTON, DC (December 16, 2013) – Patient Privacy Rights’ (PPR) founder and chair, Deborah C. Peel, MD, submitted written testimony to the U.S. Department of Health and Human Services’ Office of the National Coordinator (ONC) at today’s Patient Matching Stakeholder Meeting. The meeting discussed the initial findings from the ONC’s dedicated initiative to assess which aspects of patient identification matching are working well, where there are gaps, and where improvements are needed.

 

In her prepared testimony, Dr. Peel said that “the Initial Findings address the problems caused by current institutional health information technology (health IT) systems and data exchanges.” However, she also stated that the findings may not adequately address future needs, nor do they foresee how the meaningful use requirements of the Health Information Technology for Economic and Clinical Health (HITECH) Act can resolve many of the current problems with patient identity and patient matching.

 

Arguing that the findings present a tremendous opportunity to create and leverage genuine patient engagement, Dr. Peel said that “patients have more interest and stake in data integrity and safety than any other stakeholder.” Describing PPR’s vision of the future, Dr. Peel outlined how meaningful patient engagement will eliminate many of the complex problems caused by current patient identity systems, matching technologies, and algorithms. She also said that meaningful patient engagement means that patients can access, control, or delegate how their personal information is used and disclosed, as well as monitor all exchanges of their health data in real time.
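
To illustrate why matching patients on demographics alone is error-prone, and why a voluntary, patient-controlled identifier can help, here is a minimal sketch in Python. The record fields, names, and ID format are hypothetical and are not taken from any actual matching system.

```python
# Sketch: demographic matching is brittle; a patient-supplied ID is not.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Record:
    last_name: str
    first_name: str
    dob: str          # YYYY-MM-DD
    zip_code: str
    patient_id: Optional[str] = None  # voluntary, patient-controlled identifier


def demographic_match(a: Record, b: Record) -> bool:
    # Deterministic matching on demographics: typos, nicknames, moves,
    # and shared birthdays all break or confuse the link.
    return (a.last_name.lower(), a.dob, a.zip_code) == (
        b.last_name.lower(), b.dob, b.zip_code)


def id_match(a: Record, b: Record) -> bool:
    return a.patient_id is not None and a.patient_id == b.patient_id


if __name__ == "__main__":
    hospital = Record("Smith", "Jane", "1970-03-02", "78701", "ppr-12345")
    clinic = Record("Smyth", "Jane", "1970-03-02", "78701", "ppr-12345")  # misspelled surname
    print(demographic_match(hospital, clinic))  # False: one typo breaks the link
    print(id_match(hospital, clinic))           # True: the patient-supplied ID still matches
```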

 

Additionally, Dr. Peel discussed key elements for meaningful patient engagement based on Fair Information Practices (FIPs) and federal law. She said that all data holders and all health data aggregators should operate as HIPAA covered entities and should be known to patients. To provide accountability and transparency, she said, each data aggregator should provide Notices of Privacy Practices (NPPs), voluntary patient-controlled IDs, patient and physician portals, Direct Secure email between patients and physicians, Blue Button Plus (BB+), and real-time accounting of disclosures.

 

In her concluding remarks, Dr. Peel stated that policies and best practices should consider how future health IT systems and data exchanges will operate, and should “anticipate meaningful patient and physician engagement, lowering costs, improving data quality, integrity and patient safety.” She urged the ONC to require, promote, and incentivize the rapid adoption of technologies that meaningfully engage patients as described in her testimony.
The complete text of this testimony is here.

ACP Supports Creating National Rx Drug Monitoring Database

Wednesday, December 11, 2013
 
The American College of Physicians supports the development of a national prescription drug monitoring program, which would create a single database that physicians and pharmacies could electronically review before prescribing controlled substances, according to a position paper, CBS News reports. The paper was published in the Annals of Internal Medicine on Monday (Jaslow, CBS News, 12/9).

 

A new national drug database will extend the failed “War on Drugs”, criminalize millions more, increase patients’ reluctance to use controlled substances, and NOT improve treatment for addiction. US prescriptions are already collected and sold daily by prescription data aggregators like IMS Health, Merck Medco, SureScripts, and others. These businesses all sell the nation’s prescription data to any willing buyers. Meanwhile, neither physicians nor patients can get electronic copies of prescription data to improve care. Who should health technology benefit? Patients or corporations?

Why not use patients’ prescription data, already being collected by the hidden data aggregation industry, to improve patient health?

Why not use technology to strengthen the patient-physician relationship and to ensure effective diagnosis and treatment?

For example, here is one way technology could be re-designed to help patients:

Any time a patient gets a controlled substance prescription, existing systems could automatically search for any prior controlled substance prescriptions the patient received in the last month. If a second or third prescription is found, the physician(s) and patient could be automatically notified and resolve together whether it should be filled or not, and how best to treat the patient’s symptoms.
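
A rough sketch of that check, written in Python with hypothetical record structures (no real pharmacy system or prescription-monitoring API is assumed), might look like this:

```python
# Sketch: flag prior controlled-substance prescriptions within a lookback
# window so prescriber(s) and patient can be notified and decide together.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Prescription:
    patient_id: str
    drug: str
    controlled: bool
    prescriber: str
    written_on: date


def prior_controlled_rx(new_rx: Prescription,
                        history: list,
                        lookback_days: int = 30) -> list:
    """Return prior controlled-substance prescriptions for the same patient
    written within the lookback window."""
    if not new_rx.controlled:
        return []
    cutoff = new_rx.written_on - timedelta(days=lookback_days)
    return [rx for rx in history
            if rx.patient_id == new_rx.patient_id
            and rx.controlled
            and rx.written_on >= cutoff]


if __name__ == "__main__":
    history = [Prescription("pt-001", "oxycodone", True, "Dr. A", date(2013, 11, 20))]
    new_rx = Prescription("pt-001", "hydrocodone", True, "Dr. B", date(2013, 12, 10))
    prior = prior_controlled_rx(new_rx, history)
    if prior:
        # In a real system this would notify both prescribers and the patient
        # rather than print to the console.
        print(f"Alert: {len(prior)} prior controlled prescription(s) in the last 30 days")
```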

Technology should give patients and doctors the data they need for effective TREATMENT. It’s sad that such a prominent physician group supports giving law enforcement automatic access to every controlled substance prescription in the US. Law enforcement should only be able to access such sensitive patient data AFTER someone has committed a crime or with a judge’s approval.

Why open ALL prescriptions to law enforcement surveillance when the vast majority of patients taking controlled substances are not criminals?

Addiction is NOT a crime; it’s a very treatable medical illness.

deb

 

Can we at least try not to kill 440,000 patients per year?

Check out the latest from Doc Searls, courtesy of Doc Searls Weblog.

Obamacare matters. But the debate about it also misdirects attention away from massive collateral damage to patients. How massive? Dig “To Make Hospitals Less Deadly, a Dose of Data,” by Tina Rosenberg in The New York Times. She writes,

Until very recently, health care experts believed that preventable hospital error caused some 98,000 deaths a year in the United States — a figure based on 1984 data. But a new report from the Journal of Patient Safety using updated data holds such error responsible for many more deaths — probably around some 440,000 per year. That’s one-sixth of all deaths nationally, making preventable hospital error the third leading cause of death in the United States. And 10 to 20 times that many people suffer nonlethal but serious harm as a result of hospital mistakes.

The bold-facing is mine. In 2003, one of those statistics was my mother. I too came close in 2008, though the mistake in that case wasn’t a hospital’s, but rather a consequence of incompatibility between different silo’d systems for viewing MRIs, and an ill-informed rush into a diagnostic procedure that proved unnecessary and caused pancreatitis (which happens in 5% of those performed — I happened to be that one in twenty). That event, my doctors told me, increased my long-term risk of pancreatic cancer.

Risk is the game we’re playing here: the weighing of costs and benefits, based on available information. Thus health care is primarily the risk-weighing business we call insurance. For generations, the primary customers of health care — the ones who pay for the services — have been insurance companies. Their business is selling bets on outcomes to us, to our employers, or both. They play that game, to a large extent, by knowing more than we do. Asymmetrical knowledge R them.

Now think about the data involved. Insurance companies live in a world of data. That world is getting bigger and bigger. And yet, McKinsey tells us, it’s not big enough. In The big-data revolution in US health care: Accelerating value and innovation (subtitle: Big data could transform the health-care sector, but the industry must undergo fundamental changes before stakeholders can capture its full value), McKinsey writes,

Fiscal concerns, perhaps more than any other factor, are driving the demand for big-data applications. After more than 20 years of steady increases, health-care expenses now represent 17.6 percent of GDP—nearly $600 billion more than the expected benchmark for a nation of the United States’s size and wealth. To discourage overutilization, many payors have shifted from fee-for-service compensation, which rewards physicians for treatment volume, to risk-sharing arrangements that prioritize outcomes. Under the new schemes, when treatments deliver the desired results, provider compensation may be less than before. Payors are also entering similar agreements with pharmaceutical companies and basing reimbursement on a drug’s ability to improve patient health. In this new environment, health-care stakeholders have greater incentives to compile and exchange information.

While health-care costs may be paramount in big data’s rise, clinical trends also play a role. Physicians have traditionally used their judgment when making treatment decisions, but in the last few years there has been a move toward evidence-based medicine, which involves systematically reviewing clinical data and making treatment decisions based on the best available information. Aggregating individual data sets into big-data algorithms often provides the most robust evidence, since nuances in subpopulations (such as the presence of patients with gluten allergies) may be so rare that they are not readily apparent in small samples.

Although the health-care industry has lagged behind sectors like retail and banking in the use of big data—partly because of concerns about patient confidentiality—it could soon catch up. First movers in the data sphere are already achieving positive results, which is prompting other stakeholders to take action, lest they be left behind. These developments are encouraging, but they also raise an important question: is the health-care industry prepared to capture big data’s full potential, or are there roadblocks that will hamper its use?

The word “patient” appears nowhere in that long passage. The word “stakeholder” appears twice, plus eight more times in the whole piece. Still, McKinsey brooks some respect for the patient, though more as a metric zone than as a holder of a stake in outcomes:

Health-care stakeholders are well versed in capturing value and have developed many levers to assist with this goal. But traditional tools do not always take complete advantage of the insights that big data can provide. Unit-price discounts, for instance, are based primarily on contracting and negotiating leverage. And like most other well-established health-care value levers, they focus solely on reducing costs rather than improving patient outcomes. Although these tools will continue to play an important role, stakeholders will only benefit from big data if they take a more holistic, patient-centered approach to value, one that focuses equally on health-care spending and treatment outcomes.

McKinsey’s customers are not you and me. They are business executives, many of whom work in health care. As players in their game, we have zero influence. As voters in the democracy game, however, we have a bit more. That’s one reason we elected Barack Obama.

So, viewed from the level at which it plays out, the debate over health care, at least in the U.S., is between those who believe in addressing problems with business (especially the big kind) and those who believe in addressing problems with policy (especially the big kind, such as Obamacare).

Big business has been winning, mostly. This is why Obamacare turned out to be a set of policy tweaks on a business that was already highly regulated, mostly by captive lawmakers and regulators.

Meanwhile we have this irony to contemplate: while dying of bad data at a rate rivaling war and plague, our physical bodies are being doubled into digital ones. It is now possible to know one’s entire genome, including clear markers of risks such as cancer and dementia. That’s in addition to being able to know one’s quantified self (QS), plus one’s health care history.

Yet all of that data is scattered and silo’d. This is why it is hard to integrate all our available QS data, and nearly impossible to integrate all our health care history. After I left the Harvard University Health Services (HUHS) system in 2010, my doctor at the time (Richard Donohue, MD, whom I recommend highly) obtained and handed over to me the entirety of my records from HUHS. It’s not data, however. It’s a pile of paper, as thick as the Manhattan phone book. Its utility to other doctors verges on nil. Such is the nature of the bizarre information asymmetry (and burial) in the current system.

On top of that, our health care system incentivizes us to conceal our history, especially if any of that history puts us in a higher risk category, sure to pay more in health insurance premiums.

But what happens when we solve these problems, and our digital selves become fully knowable — by both our selves and our health care providers? What happens to the risk calculation business we have today, which rationalizes more than 400,000 snuffed souls per annum as collateral damage? Do we go to single-payer then, for the simple reason that the best risk calculations are based on the nation’s entire population?

I don’t know.

I do know the current system doesn’t want to go there, on either the business or the policy side. But it will. Inevitably.

At the end of whatever day this is, our physical selves will know our data selves better than any system built to hoard and manage our personal data for their interests more than for ours. When that happens the current system will break, and another one will take its place.

How many more of us will die needlessly in the meantime? And does knowing (or guessing at) that number make any difference? It hasn’t so far.

But that shouldn’t stop us. Hats off to leadership in the direction of actually solving these problems, starting with Adrian Gropper, ePatient Dave, Patient Privacy Rights, Brian Behlendorf, Esther Dyson, John Wilbanks, Tom Munnecke and countless other good people and organizations who have been pushing this rock up a hill for a long time, and aren’t about to stop. (Send Doc more names or add comments directly to this blog here.)

Courtesy of Doc Searls Weblog

Texas Election 2014: Abbott Pledges to Safeguard DNA

“Texas gubernatorial frontrunner Greg Abbott recently released an extensive list of items he says he’ll push for once elected. The list includes gun rights, campaign ethics, and blocking implementation of the Affordable Care Act, but the number one item is safeguarding your DNA, according to KUT News.”

To view the full article, please visit: Texas Election 2014: Abbott Pledges to Safeguard DNA

Will Texans Own Their DNA?


Greg Abbott, candidate for Governor, thinks they should

 

On November 12th, Abbott released his “We the People Plan” for Texas. Clearly he’s heard from Texans who want tough new health data privacy protections.

 

Topping his list are four terrific privacy recommendations for health and genetic data:

  • “Recognize a property right in one’s own DNA.”
  • “Make state agencies, before selling database information, acquire the consent of any individual whose data is to be released.”
  • “Prohibit data resale and anonymous purchasing by third parties.”
  • “Prohibit the use of cross referencing techniques to identify individuals whose data is used as a larger set of information in an online data base.”
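
The last item targets what researchers call re-identification by linkage: joining a supposedly anonymous data set to a public one on quasi-identifiers such as birth date, sex, and ZIP code. A minimal illustration in Python, using entirely made-up data, follows:

```python
# Sketch: re-identifying "anonymous" health rows by cross-referencing them
# against a public roster on quasi-identifiers. All data is fabricated.
anonymous_claims = [
    {"dob": "1968-07-04", "sex": "F", "zip": "78701", "diagnosis": "depression"},
    {"dob": "1990-01-15", "sex": "M", "zip": "75201", "diagnosis": "asthma"},
]

public_roster = [  # e.g., a voter file or marketing list that includes names
    {"name": "Jane Doe", "dob": "1968-07-04", "sex": "F", "zip": "78701"},
    {"name": "John Roe", "dob": "1990-01-15", "sex": "M", "zip": "75201"},
]


def cross_reference(claims, roster):
    """Attach a name to each 'anonymous' row whose quasi-identifiers
    match exactly one person in the public roster."""
    reidentified = []
    for row in claims:
        hits = [p for p in roster
                if (p["dob"], p["sex"], p["zip"]) == (row["dob"], row["sex"], row["zip"])]
        if len(hits) == 1:
            reidentified.append({**row, "name": hits[0]["name"]})
    return reidentified


if __name__ == "__main__":
    for match in cross_reference(anonymous_claims, public_roster):
        print(match["name"], "->", match["diagnosis"])
```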

 

The Omnibus Privacy Rule operationalized the technology section of the stimulus bill. It also clarified that states can pass data privacy laws that are stronger than HIPAA (which is a very weak floor for data protections).

 

Texans would overwhelmingly support the new state data protection laws Abbott recommends. Hopefully, if elected, Abbott would also include strong penalties for violations. Contracts don’t enforce themselves. External auditing and proof of trustworthy practices should be required.

 

Is this the beginning of a national trend?  I think so.

 

The more the public learns about today’s health IT systems, the more they will reject health surveillance technologies that steal and sell sensitive personal health data.