Why classic anonymization fails

What is classic anonymization?

By classic anonymization, we mean all methodologies in which one manipulates or distorts an original dataset to hinder tracing back individuals.

Typical examples of classic anonymization that we see in practice are generalization, suppression / wiping, pseudonymization and row and column shuffling.

Below are those techniques with corresponding examples.

| Technique | Original data | Manipulated data |
| --- | --- | --- |
| Generalization | 27 years old | Between 25 and 30 years old |
| Suppression / wiping | info@syntho.ai | xxxx@xxxxxx.xx |
| Pseudonymization | Amsterdam | hVFD6td3jdHHj78ghdgrewui6 |
| Row and column shuffling | Aligned | Shuffled |
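To make the four techniques concrete, here is a minimal sketch in plain Python. The function names, salt, and bucket size are illustrative choices, not part of any particular anonymization product.

```python
import hashlib
import random

record = {"age": 27, "email": "info@syntho.ai", "city": "Amsterdam"}

# Generalization: replace an exact value with a coarser range.
def generalize_age(age, bucket=5):
    low = (age // bucket) * bucket
    return f"between {low} and {low + bucket}"

# Suppression / wiping: mask characters while keeping the format.
def suppress_email(email):
    local, domain = email.split("@")
    name, _, tld = domain.rpartition(".")
    return "x" * len(local) + "@" + "x" * len(name) + "." + "x" * len(tld)

# Pseudonymization: replace a value with a consistent opaque token.
def pseudonymize(value, salt="demo-salt"):
    return hashlib.sha256((salt + value).encode()).hexdigest()[:24]

# Row shuffling: permute values within a column, breaking row alignment.
def shuffle_column(values, seed=42):
    rng = random.Random(seed)
    shuffled = list(values)
    rng.shuffle(shuffled)
    return shuffled

print(generalize_age(record["age"]))    # between 25 and 30
print(suppress_email(record["email"]))  # xxxx@xxxxxx.xx
print(pseudonymize(record["city"]))     # a 24-character token
```

Note that each transformation still produces one output row per input row, which is exactly the 1-to-1 relation discussed below.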

What are the disadvantages of classic anonymization?

Manipulating a dataset with classic anonymization techniques results in two key disadvantages:

  1. Distorting a dataset results in decreased data quality (i.e. data utility). This introduces the classic garbage-in garbage-out principle.
  2. Privacy risk is reduced, but always remains: the result is still a manipulated version of the original dataset with 1-to-1 relations to real individuals.
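The second disadvantage, residual privacy risk from 1-to-1 relations, can be illustrated with a toy linkage attack: even after removing direct identifiers, each row still corresponds to one real person, so joining on remaining quasi-identifiers can re-identify them. All names and values below are hypothetical, for illustration only.

```python
# "Anonymized" dataset: direct identifiers removed, quasi-identifiers kept.
anonymized = [
    {"zip": "1011", "age": 27, "diagnosis": "flu"},
    {"zip": "1012", "age": 34, "diagnosis": "asthma"},
]

# Public auxiliary dataset, e.g. a voter register or social-media profile dump.
public = [
    {"name": "Alice", "zip": "1011", "age": 27},
    {"name": "Bob", "zip": "1012", "age": 34},
]

# Join on the quasi-identifiers (zip, age): every match re-identifies a row.
reidentified = [
    {**p, "diagnosis": a["diagnosis"]}
    for a in anonymized
    for p in public
    if (p["zip"], p["age"]) == (a["zip"], a["age"])
]

print(reidentified)
# Each matched record now links a name back to a sensitive diagnosis.
```

Because the manipulated dataset preserves one row per individual, any sufficiently rich auxiliary dataset makes such a join possible.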

We demonstrate these two key disadvantages, reduced data utility and residual privacy risk, with the following illustration, in which suppression and generalization are applied.

Note: we use images for illustrative purposes. The same principle holds for structured datasets.

Classic anonymization fails
  • Left: light application of classic anonymization results in a representative illustration. However, the individual can easily be identified and the privacy risk is significant.
  • Right: severe application of classic anonymization results in strong privacy protection. However, the illustration becomes useless.

This introduces the trade-off between data utility and privacy protection, where classic anonymization techniques always offer a suboptimal combination of both.


Is removing all direct identifiers (such as names) from the dataset a solution?

No. This is a big misconception and does not result in anonymous data. Do you still apply this as a way to anonymize your dataset? Then this blog is a must-read for you.

How is Synthetic Data different?

Syntho develops software to generate an entirely new dataset of fresh data records. Since synthetic data consists of artificial records generated by software, information that could identify real individuals is simply not present, resulting in a situation with no privacy risk.

The key difference at Syntho: we apply machine learning. Consequently, our solution reproduces the structure and properties of the original dataset in the synthetic dataset, resulting in maximized data utility. Accordingly, you will be able to obtain the same results when analyzing the synthetic data as when using the original data.

This case study demonstrates highlights from our quality report containing various statistics from synthetic data generated through our Syntho Engine in comparison to the original data.
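As a rough idea of what such a quality report compares, the sketch below contrasts simple summary statistics of an original column with those of a synthetic counterpart. The data and the choice of statistics are entirely hypothetical and do not reflect the actual Syntho quality report.

```python
from statistics import mean, stdev

# Hypothetical "age" columns: one from an original dataset and one
# from a synthetic dataset generated to mimic its distribution.
original_age = [27, 34, 45, 52, 38, 29, 41, 36]
synthetic_age = [28, 33, 46, 50, 39, 30, 40, 37]

def summarize(values):
    """Return a small set of distribution statistics for one column."""
    return {"mean": round(mean(values), 1), "stdev": round(stdev(values), 1)}

print("original :", summarize(original_age))
print("synthetic:", summarize(synthetic_age))
# Close agreement between the two summaries indicates preserved data
# utility, while no synthetic row corresponds 1-to-1 to a real person.
```

In practice a quality report would cover many more statistics (correlations, category frequencies, and so on), but the principle is the same: distributions should match, individual rows should not.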

In conclusion, synthetic data is the preferred solution for overcoming the suboptimal trade-off between data utility and privacy protection that all classic anonymization techniques offer.


So, why use real (sensitive) data when you can use synthetic data?

From a data-utility and privacy-protection perspective, one should always opt for synthetic data when the use case allows it.

| | Value for analysis | Privacy risk |
| --- | --- | --- |
| Synthetic data | High | None |
| Real (personal) data | High | High |
| Manipulated data (through classic 'anonymization') | Low-Medium | Medium-High |

Synthetic data by Syntho fills the gaps where classic anonymization techniques fall short by maximizing both data utility and privacy protection.

Interested?

Explore the added value of Synthetic Data with us