Statistical biases due to anonymization evaluated in an open clinical dataset from COVID-19 patients

NAPKON Study Group, NAPKON Use & Access Committee, NAPKON Steering Committee, NAPKON Study Site Group, NAPKON Infrastructure Group

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Anonymization has the potential to foster the sharing of medical data. State-of-the-art methods use mathematical models to modify data to reduce privacy risks. However, the degree of protection must be balanced against the impact on statistical properties. We studied an extreme case of this trade-off: the statistical validity of an open medical dataset based on the German National Pandemic Cohort Network (NAPKON), which was prepared for publication using a strong anonymization procedure. Descriptive statistics and results of regression analyses were compared before and after anonymization of multiple variants of the original dataset. Despite significant differences in value distributions, the statistical bias was found to be small in all cases. In the regression analyses, the median absolute deviations of the estimated adjusted odds ratios for different sample sizes ranged from 0.01 [minimum = 0, maximum = 0.58] to 0.52 [minimum = 0.25, maximum = 0.91]. Disproportionate impact on the statistical properties of data is a common argument against the use of anonymization. Our analysis demonstrates that anonymization can preserve the validity of statistical results in relatively low-dimensional data.
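The bias metric described in the abstract can be illustrated with a short sketch: fit the same logistic regression on the original dataset and on an anonymized variant, then summarize the absolute deviations between the resulting adjusted odds ratios. This is not the authors' code; the data, column names, and the coarsening step used as a stand-in for the anonymization procedure are all hypothetical, and statsmodels is assumed as the regression library.

```python
# Minimal sketch (hypothetical data and variable names) of comparing adjusted
# odds ratios before and after a simple anonymization step.
import numpy as np
import pandas as pd
import statsmodels.api as sm


def adjusted_odds_ratios(df, outcome, covariates):
    """Fit a logistic regression and return adjusted odds ratios per covariate."""
    X = sm.add_constant(df[covariates])
    model = sm.Logit(df[outcome], X).fit(disp=0)
    return np.exp(model.params.drop("const"))


# Hypothetical original data: age, sex, and a binary outcome.
rng = np.random.default_rng(0)
n = 500
original = pd.DataFrame({
    "age": rng.normal(60, 15, n),
    "male": rng.integers(0, 2, n),
})
logit = 0.03 * (original["age"] - 60) + 0.4 * original["male"]
original["severe"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Crude stand-in for anonymization: coarsen age into 10-year bands (midpoints).
anonymized = original.copy()
anonymized["age"] = (anonymized["age"] // 10) * 10 + 5

or_orig = adjusted_odds_ratios(original, "severe", ["age", "male"])
or_anon = adjusted_odds_ratios(anonymized, "severe", ["age", "male"])

# Absolute deviations of the adjusted odds ratios, summarized as in the abstract.
deviations = (or_orig - or_anon).abs()
print("Median absolute deviation:", deviations.median())
print("Minimum:", deviations.min(), "Maximum:", deviations.max())
```

In the study, such deviations were computed across multiple anonymized variants and sample sizes; the sketch shows only a single original/anonymized comparison.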

Original language: English
Article number: 776
Journal: Scientific Data
Volume: 9
Issue number: 1
DOIs
State: Published - Dec 2022
