Comments: ONC studying risks of de-identified patient records
It’s nice to know that the federal government will “analyze the science of de-identification and re-identification” before releasing health data. See this article from Government Health IT: ONC studying risks of de-identified patient records (written by Mary Mosquera).
But instead of each patient being informed about the level of risk and then deciding whether that level of risk is acceptable before agreeing to participate in research, the government will decide the “acceptable level of risk in order to be able to use the data”.
Two major problems need to be addressed before “de-identified” public use data (PUD) is released for “research”:
1) The “research” loophole in HIPAA allows any corporation to get access to our health data without consent, at low or no cost, simply by claiming that it is doing research. This loophole needs to be closed. Most ‘research’ use of health data today is NOT what Congress intended: i.e., research to improve patient health or to prevent illness. Instead, corporations claim our data will be used for ‘research’ when in reality they sell it or use it for business analytics. Business analytics is used by industry to discriminate against people in jobs, credit, and educational opportunities. The health data mining industry is exploiting the “research loophole” to obtain Americans’ health data to improve revenues, not to improve patient treatment or health. The name for that is fraud.
- For an example of how data is being released for research now without effective de-identification by the state of Texas see: Hospital Patient Privacy Sacrificed as State Agency Sells or Gives Away Data, Technology Used by For-Profit Companies Strips Away Inadequate Layers of Security.
2) Who decides what level of de-identification is ‘safe’ enough? Should the federal government decide for us? Or should we be able to decide what risk we are willing to accept?
Patient Privacy Rights submitted a memo to CMS highlighting the difficulties of anonymizing data for public release and advocating an “adversarial challenge” criterion for assessing the threats associated with such releases. See: NOTES ABOUT ANONYMIZING DATA FOR PUBLIC RELEASE, by Andrew J. Blumberg.
BTW: what if banks suddenly decided that a 0.04% risk of electronic theft of funds and/or a 0.04% rate of errors in our deposits was ‘safe’ enough for account holders? Would you accept that level of risk? Is any rate of theft or error acceptable for our money?
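To put a rate like 0.04% in perspective, here is a quick back-of-the-envelope sketch (the population figure is a hypothetical illustration, not a number from any of the cited documents):

```python
def expected_affected(population: int, risk_rate: float) -> float:
    """Expected number of individuals affected at a given per-record risk rate."""
    return population * risk_rate

# Hypothetical illustration: a 0.04% risk applied to 300 million records
# still translates into roughly 120,000 affected individuals.
print(expected_affected(300_000_000, 0.0004))  # 120000.0
```

A risk that sounds tiny as a percentage can still mean a very large number of real people once it is applied to a population-scale data release.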
Why should we accept anything more than a zero risk of theft or error for our health records?