Dangers of healthcare data collection

Jordanna Kalkhof February 18, 2020


Now that we’ve covered the benefits of healthcare data collection, let’s take a closer look at the accompanying dangers. Here’s a quick recap of the list before we get started:

Benefits

  • Valuable research and insights
  • Personalization and the ability to relate symptoms
  • Predictive capabilities and epidemic prevention
  • Developing AI
  • Cost reduction

Dangers

  • Data breaches and security vulnerability
  • Privacy concerns between healthcare companies and other third parties
  • Becoming too personal
  • Increased health insurance rates and employment risks

 

Hacks, breaches, and vulnerabilities 

It seems as though every day there’s news about another hack, data breach, or some other vulnerability that has exposed personal data. These exposures have reached just about every industry – from finance to hospitality to, you guessed it, healthcare. CNET recapped some of the biggest data breaches of 2019, stating, “The total number of breaches was up 33% over last year, according to research from Risk Based Security, with medical services, retailers and public entities most affected. That’s a whopping 5,183 data breaches for a total of 7.9 billion exposed records.” That’s A LOT of data. Some of these records may seem trivial, such as a first and last name, but others are extremely personal, including financial and medical information. Unfortunately, once this data is out there, there is no getting it back. Suddenly your medical conditions and prescriptions are no longer private, which can affect your life in more ways than one. Until healthcare data can be collected and stored securely, there will always be some level of resistance from the public – and can you blame them?

Involving third parties

Healthcare companies also share your data willingly for the sake of research and development. In some cases, the data is anonymized before it is shared – removing any obviously identifiable information from the records. However, this isn’t always as anonymous as companies may think. Seemingly innocuous information can still leave breadcrumbs that lead back to specific individuals, as was seen in the lawsuit involving Google DeepMind and medical timestamps. Gina Neff, author of an article in the National Library of Medicine, put it this way: “If data from just a few pieces of less-protected demographic information can reidentify someone, imagine what adding genetic information or disease conditions could mean for privacy risks in large-scale shared and pooled data.”

There are also cases where medical records are shared with third parties without any anonymization at all. Such is the case with Google’s partnership with Ascension, also known as Project Nightingale, through which tens of millions of personal health records are being shared. How is that allowed? HIPAA privacy rules permit the sharing of medical information between business associates under certain guidelines; unfortunately, the definition of “business associate” and the permitted use cases are somewhat vague. In these scenarios, patients are often unaware their data is being shared and have no way to opt out even if they were. It’s no secret that Google is in the business of building extremely detailed user profiles, so it’s reasonable to be concerned when it handles such sensitive information. After the announcement of Google’s partnership with the Mayo Clinic, Lawrence Gostin, an expert in health data privacy laws at Georgetown Law School, told Wired, “The problem is Google’s business model is to use or sell data… I’m far from convinced that Google might not use identifiable information for its business purposes.” Until more regulations are put in place, there seems to be no end to the scope of personal information that big tech companies can get their hands on.
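To make the re-identification point above a bit more concrete, here is a minimal sketch in Python. The records, field names, and quasi-identifiers are entirely made up for illustration; the point is simply that a handful of “harmless” attributes – especially once timestamps are added – can narrow a supposedly anonymous record down to a single person.

```python
# Toy illustration of re-identification risk. All records below are invented,
# and the fields are hypothetical examples of "anonymized" hospital data.
from collections import Counter

records = [
    {"zip3": "021", "birth_year": 1984, "sex": "F", "admitted": "2019-03-02 14:17"},
    {"zip3": "021", "birth_year": 1984, "sex": "M", "admitted": "2019-03-02 09:41"},
    {"zip3": "100", "birth_year": 1957, "sex": "F", "admitted": "2019-03-05 11:03"},
    {"zip3": "100", "birth_year": 1957, "sex": "F", "admitted": "2019-03-06 16:22"},
    {"zip3": "606", "birth_year": 1990, "sex": "M", "admitted": "2019-03-07 08:55"},
]

# Count how many records share each combination of quasi-identifiers.
# No names or IDs are present, yet most combinations already point to one person.
combos = Counter((r["zip3"], r["birth_year"], r["sex"]) for r in records)
for combo, count in combos.items():
    label = "unique -> re-identifiable" if count == 1 else f"shared by {count} records"
    print(combo, label)

# Adding the admission timestamp as a fourth quasi-identifier makes every
# record unique, which is why even "anonymized" timestamps can be revealing.
with_time = Counter(
    (r["zip3"], r["birth_year"], r["sex"], r["admitted"]) for r in records
)
unique = sum(1 for c in with_time.values() if c == 1)
print(f"unique with timestamp: {unique} of {len(records)}")
```

The same counting exercise, run against real datasets, is roughly how researchers estimate how unique a given combination of demographic attributes is.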

That’s too personal

This raises the question: how personal is too personal? The answer is unclear and largely comes down to individual preference. Our daily lives are becoming more and more technologically connected – and we love it. Things like smart home devices and wearables make users’ lives easier and often provide useful information. But just because we want to know this information doesn’t mean we want others to know it as well. Take Fitbit, for example. Fitbit has over 28 million active users who rely on its devices and the information they provide to live healthier lives. Back in November, Fitbit was acquired by Google – surprise, surprise – and many users expressed privacy concerns about yet another data pool the tech giant would have on them. Bruce Lee, a senior contributor at Forbes, comically explained the potential for further ad targeting: “Waiting for the day that your device will say, ‘You’ve been on the toilet for a really long time, and your heart rate seems to increase periodically. Here are a bunch of stool softeners that you can buy.’” While this may be a humorous exaggeration, it’s also not hard to imagine. Lifestyle data, medical information, demographics, and more all overlap to present companies with a holistic view of you as an individual.

Insurance and employment

Another danger of collecting and sharing healthcare data is its potential effect on insurance rates and employment. In the Forbes article mentioned above, Lee notes this privacy concern as well, stating, “Another is such data being offered to employers, insurance companies, financial firms, and others who may be interested in knowing people’s health and disease status. Could a company be less inclined to hire, insure, offer credit to, or invest in you if you have certain ‘pre-existing conditions’ or are deemed by an algorithm as a ‘health risk’?” This is arguably unethical, but it is not out of the realm of possibility. Health insurance companies gather all types of information to feed computer algorithms that assess risk and make cost predictions. There is an endless supply of public information for these insurers to pull from, including information collected by data brokers and even social media platforms. Optum, a big name in the health data industry, has boasted about its ability to tie a person’s social media posts to their clinical and financial information. Hypothetically speaking, that late-night Twitter post you wrote while “in your feels” could flag you as at risk for depression; depression can be expensive to treat, so expect a higher health insurance rate. Many insurance companies claim this type of non-medical information does not affect pricing, but reports suggest it does happen.

There is a similar risk when it comes to employment. Employers could theoretically filter out job candidates based on personal medical information. For example, if a candidate had a medical condition that posed a risk of missed work, an employer could make assumptions and deem them unfit for the job. Health data is powerful information. But as the saying goes, with great power comes great responsibility, and there is no guarantee that this information will be used responsibly.
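For illustration only, here is a hypothetical sketch of the kind of scoring described above. Every signal, weight, and threshold below is invented; no real insurer or employer publishes its model, and this is not based on any actual system. It is simply meant to show how easily unrelated lifestyle data can be rolled into a single “risk” number that drives a decision.

```python
# Hypothetical "health risk" scorer. The signals, weights, and threshold are
# made up for illustration and do not reflect any real insurer's or
# employer's algorithm.

APPLICANT = {
    "late_night_posts_per_week": 4,     # scraped social media activity
    "fitness_tracker_active_days": 2,   # wearable lifestyle data
    "pharmacy_purchases_flagged": 1,    # data-broker purchase history
}

WEIGHTS = {
    "late_night_posts_per_week": 0.5,
    "fitness_tracker_active_days": -0.3,
    "pharmacy_purchases_flagged": 2.0,
}

RISK_THRESHOLD = 2.5  # arbitrary cutoff for this example


def risk_score(profile: dict) -> float:
    """Combine unrelated lifestyle signals into a single 'health risk' number."""
    return sum(WEIGHTS[key] * value for key, value in profile.items())


score = risk_score(APPLICANT)
print(f"risk score: {score:.1f}")
if score > RISK_THRESHOLD:
    # The worrying part: a consequential decision gets made from data that
    # was never intended to describe anyone's health in the first place.
    print("flagged as a 'health risk' -> higher premium or filtered application")
```

None of the inputs above are medical records, yet the output reads like a medical judgment – which is exactly the concern Lee describes.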

 

Now that you have a well-rounded view of healthcare data collection, you can decide if you are for or against the use of Big Data in the medical industry. Many of us fall somewhere in between, seeing a case for both the benefits and the dangers, and that’s okay too. Come back next week as we wrap up this month’s healthcare theme with a post about wearables and how to adjust their privacy settings.
