TY - GEN
T1 - Bias in Unsupervised Anomaly Detection in Brain MRI
AU - Bercea, Cosmin I.
AU - Puyol-Antón, Esther
AU - Wiestler, Benedikt
AU - Rueckert, Daniel
AU - Schnabel, Julia A.
AU - King, Andrew P.
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
PY - 2023
Y1 - 2023
N2 - Unsupervised anomaly detection methods offer a promising and flexible alternative to supervised approaches, holding the potential to revolutionize medical scan analysis and enhance diagnostic performance. In the current landscape, it is commonly assumed that differences between a test case and the training distribution are attributed solely to pathological conditions, implying that any disparity indicates an anomaly. However, the presence of other potential sources of distributional shift, including scanner, age, sex, or race, is frequently overlooked. These shifts can significantly impact the accuracy of the anomaly detection task. Prominent instances of such failures have sparked concerns regarding the bias, credibility, and fairness of anomaly detection. This work presents a novel analysis of biases in unsupervised anomaly detection. By examining potential non-pathological distributional shifts between the training and testing distributions, we shed light on the extent of these biases and their influence on anomaly detection results. Moreover, this study examines the algorithmic limitations that arise due to biases, providing valuable insights into the challenges encountered by anomaly detection algorithms in accurately capturing the variability in the normative distribution. Here, we specifically investigate Alzheimer’s disease detection from brain MR imaging as a case study, revealing significant biases related to sex, race, and scanner variations that substantially impact the results. These findings align with the broader goal of improving the reliability, fairness, and effectiveness of anomaly detection.
AB - Unsupervised anomaly detection methods offer a promising and flexible alternative to supervised approaches, holding the potential to revolutionize medical scan analysis and enhance diagnostic performance. In the current landscape, it is commonly assumed that differences between a test case and the training distribution are attributed solely to pathological conditions, implying that any disparity indicates an anomaly. However, the presence of other potential sources of distributional shift, including scanner, age, sex, or race, is frequently overlooked. These shifts can significantly impact the accuracy of the anomaly detection task. Prominent instances of such failures have sparked concerns regarding the bias, credibility, and fairness of anomaly detection. This work presents a novel analysis of biases in unsupervised anomaly detection. By examining potential non-pathological distributional shifts between the training and testing distributions, we shed light on the extent of these biases and their influence on anomaly detection results. Moreover, this study examines the algorithmic limitations that arise due to biases, providing valuable insights into the challenges encountered by anomaly detection algorithms in accurately capturing the variability in the normative distribution. Here, we specifically investigate Alzheimer’s disease detection from brain MR imaging as a case study, revealing significant biases related to sex, race, and scanner variations that substantially impact the results. These findings align with the broader goal of improving the reliability, fairness, and effectiveness of anomaly detection.
KW - Bias
KW - Fairness
KW - Unsupervised Anomaly Detection
UR - http://www.scopus.com/inward/record.url?scp=85175832256&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-45249-9_12
DO - 10.1007/978-3-031-45249-9_12
M3 - Conference contribution
AN - SCOPUS:85175832256
SN - 9783031452482
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 122
EP - 131
BT - Clinical Image-Based Procedures, Fairness of AI in Medical Imaging, and Ethical and Philosophical Issues in Medical Imaging - 12th International Workshop, CLIP 2023, 1st International Workshop, FAIMI 2023 and 2nd International Workshop, EPIMI 2023, Proceedings
A2 - Wesarg, Stefan
A2 - Oyarzun Laura, Cristina
A2 - Puyol Antón, Esther
A2 - King, Andrew P.
A2 - Baxter, John S.H.
A2 - Erdt, Marius
A2 - Drechsler, Klaus
A2 - Freiman, Moti
A2 - Chen, Yufei
A2 - Rekik, Islem
A2 - Eagleson, Roy
A2 - Feragen, Aasa
A2 - Cheplygina, Veronika
A2 - Ganz-Benjaminsen, Melanie
A2 - Ferrante, Enzo
A2 - Glocker, Ben
A2 - Moyer, Daniel
A2 - Petersen, Eike
PB - Springer Science and Business Media Deutschland GmbH
T2 - 12th International Workshop on Clinical Image-Based Procedures, CLIP 2023, 1st MICCAI Workshop on Fairness of AI in Medical Imaging, FAIMI 2023, held in conjunction with MICCAI 2023 and 2nd MICCAI Workshop on the Ethical and Philosophical Issues in Medical Imaging, EPIMI 2023
Y2 - 12 October 2023 through 12 October 2023
ER -