Researchers Reveal That Anonymised Data Is Easy To Reverse Engineer

Merely existing in the modern world means handing over a wealth of personal information to countless institutions and services. While many of these organisations promise to keep your identifiable data as secure and private as possible, they can, and often do, share anonymised versions of it with third parties, whether for research or for profit. But new research indicates that even when data is stripped of identifying details, it doesn’t take much to piece together certain information and work out, with high confidence, who the “anonymous” user in a dataset is.

In other words, anonymised data is not so anonymous.

Researchers at Imperial College London published a paper in Nature Communications on Wednesday exploring how inadequate current techniques for anonymising datasets are. Before a company shares a dataset, it typically removes identifying information such as names and email addresses, but the researchers were able to game this system.

Using a machine learning model and datasets that included up to 15 identifiable characteristics — such as age, gender and marital status — the researchers were able to accurately re-identify 99.98 per cent of Americans in an anonymised dataset, according to the study.

For their analyses, the researchers used 210 different datasets, gathered from five sources including the U.S. government, featuring information on more than 11 million individuals. Specifically, the researchers describe their findings as a successful effort to propose and validate “a statistical model to quantify the likelihood for a re-identification attempt to be successful, even if the disclosed dataset is heavily incomplete.”
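To see the underlying mechanics, consider a toy linkage attack. This is a hypothetical sketch in Python, not the authors’ actual model; the records and attributes are invented. The point is that once names are gone, ordinary attributes in combination still single records out:

```python
import pandas as pd

# Hypothetical "anonymised" dataset: names and emails removed,
# but quasi-identifiers (age, gender, ZIP code) left intact.
records = pd.DataFrame([
    {"age": 34, "gender": "M", "zip": "90210", "diagnosis": "asthma"},
    {"age": 34, "gender": "F", "zip": "90210", "diagnosis": "diabetes"},
    {"age": 51, "gender": "M", "zip": "10001", "diagnosis": "asthma"},
])

# What an attacker already knows about one specific person,
# e.g. from social media or a voter roll (illustrative values).
known = {"age": 34, "gender": "M", "zip": "90210"}

# Linkage attack: discard every record inconsistent with that knowledge.
matches = records
for attribute, value in known.items():
    matches = matches[matches[attribute] == value]

print(matches)
# If a single row survives, the "anonymous" record, diagnosis included,
# has been re-identified with high confidence.
```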

The study gave a hypothetical in which a health insurance company publishes an anonymised dataset of 1,000 people, amounting to one per cent of its total customers in California. The dataset includes each individual’s birth date, gender, ZIP code and breast cancer diagnosis.

The boss of one of those individuals looks at the dataset and sees a record that is male, lives in the employee’s ZIP code, shares the same birth date and, according to the dataset, has been diagnosed with breast cancer and did not have successful stage IV treatments. The insurance company can argue that, while this data matches the employee’s record exactly, it could just as easily belong to any of the tens of thousands of other insured customers, if that individual is even insured with the company at all.
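The paper’s point is that this “it could be anyone” defence can itself be quantified, and it is often far weaker than it sounds. Here is a rough Monte Carlo sketch; every number (customer count, match probability) is invented for illustration and not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000            # insurer's total customers (assumed figure)
SAMPLE = 0.01          # the published dataset covers 1 per cent of them
P_MATCH = 1 / 50_000   # chance another customer shares the target's
                       # birth date, gender and ZIP code (assumed)
TRIALS = 1_000_000

# How many *other* customers match the target's attributes?
lookalikes = rng.binomial(N - 1, P_MATCH, TRIALS)

# Each matching person is independently included in the 1 per cent sample.
target_in = rng.random(TRIALS) < SAMPLE
lookalikes_in = rng.binomial(lookalikes, SAMPLE)

# The attacker only acts when exactly one matching record was published.
unique = target_in.astype(int) + lookalikes_in == 1
print("P(record is the target | unique match) =",
      (target_in & unique).sum() / unique.sum())
```

With these made-up numbers the answer comes out at roughly one in three, not one in tens of thousands, because anyone matching such a specific combination of attributes is rare in the first place.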

“While there might be a lot of people who are in their thirties, male and living in New York City, far fewer of them were also born on 5 January, are driving a red sports car and live with two kids (both girls) and one dog,” Dr. Luc Rocher of UCLouvain, an author on the paper, said in a statement.
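The arithmetic behind that intuition is simple: under a crude assumption that attributes are independent, the expected number of people matching every attribute is the city’s population multiplied by each attribute’s frequency. The frequencies below are invented for illustration:

```python
# Back-of-the-envelope: expected number of New Yorkers matching every
# attribute, assuming (unrealistically) independent attributes and
# using invented frequency estimates.
population = 8_400_000  # New York City, roughly

frequencies = {
    "male":                    0.48,
    "in their thirties":       0.15,
    "born on 5 January":       1 / 365,
    "drives a red sports car": 0.002,
    "two kids, both girls":    0.02,
    "owns a dog":              0.3,
}

expected = population
for attribute, freq in frequencies.items():
    expected *= freq
    print(f"{attribute:<26} -> {expected:,.2f} expected matches")
```

By the last few attributes the expected count has fallen well below one, so the combination almost certainly describes a single person.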

Senior author Dr. Yves-Alexandre de Montjoye, a researcher at Imperial’s Department of Computing and Data Science Institute, characterised such attributes as “pretty standard information for companies to ask for.”

Even the hypothetical illustrated by the researchers in the study isn’t a distant fiction. In June of this year, a patient at the University of Chicago Medical Center in the U.S. filed a class-action lawsuit against both the private research university and Google, alleging that the former shared his data with the latter without his consent.

The medical centre allegedly de-identified the dataset, but still gave Google records with the patient’s height, weight, vital signs, information on diseases they have, medical procedures they’ve undergone, medications they are on and date stamps.

The complaint pointed out that, aside from the breach of privacy in sharing intimate data without a patient’s consent, even if the data was in some way anonymised, the tools available to a powerful tech corporation make it fairly easy to reverse engineer that information and identify a patient.

“Companies and governments have downplayed the risk of re-identification by arguing that the datasets they sell are always incomplete,” de Montjoye said in a statement. “Our findings contradict this and demonstrate that an attacker could easily and accurately estimate the likelihood that the record they found belongs to the person they are looking for.”

The researchers put the onus on policymakers to create better standards for anonymisation techniques in order to ensure that the sharing of datasets doesn’t continue to be a potentially far-reaching invasion of privacy.

Some of the most powerful and exploitative companies in the world are obtaining datasets that provide enough information to confidently identify the people included in them. The consequences of those companies, or malicious actors, piecing together a puzzle and creating a fully formed picture of someone from just a handful of identifying characteristics are insidious. And the researchers’ ability to identify such a large share of de-identified users with only 15 attributes indicates that we need to re-evaluate what constitutes an ethical anonymised dataset.

“The goal of anonymisation is so we can use data to benefit society,” de Montjoye said. “This is extremely important but should not and does not have to happen at the expense of people’s privacy.”

