TY - GEN
T1 - Fair and Private CT Contrast Agent Detection
AU - Kaess, Philipp
AU - Ziller, Alexander
AU - Mantz, Lea
AU - Rueckert, Daniel
AU - Fintelmann, Florian J.
AU - Kaissis, Georgios
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - Intravenous (IV) contrast agents are an established medical tool to enhance the visibility of certain structures. However, their application substantially changes the appearance of Computed Tomography (CT) images, which, if unknown, can significantly deteriorate the diagnostic performance of neural networks. Artificial Intelligence (AI) can help to detect IV contrast, reducing the need for labour-intensive and error-prone manual labeling. However, we demonstrate that automated contrast detection can lead to discrimination against demographic subgroups. Moreover, it has been shown repeatedly that AI models can leak private training data. In this work, we analyse the fairness of conventional and privacy-preserving AI models in the detection of IV contrast on CT images. Specifically, we present models which are substantially fairer than a previously published baseline. For better comparability, we extend existing metrics to quantify the fairness of a model on a protected attribute in a single value. We provide a model fulfilling a strict Differential Privacy guarantee of (ε, δ) = (8, 2.8·10⁻³) which, with an accuracy of 97.42%, performs 5 percentage points better than the baseline. Additionally, while confirming prior work showing that strict privacy preservation increases discrimination against underrepresented subgroups, the proposed model is fairer than the baseline across all metrics when race and sex are considered as protected attributes, a result which extends to age under a more relaxed privacy guarantee.
AB - Intravenous (IV) contrast agents are an established medical tool to enhance the visibility of certain structures. However, their application substantially changes the appearance of Computed Tomography (CT) images, which, if unknown, can significantly deteriorate the diagnostic performance of neural networks. Artificial Intelligence (AI) can help to detect IV contrast, reducing the need for labour-intensive and error-prone manual labeling. However, we demonstrate that automated contrast detection can lead to discrimination against demographic subgroups. Moreover, it has been shown repeatedly that AI models can leak private training data. In this work, we analyse the fairness of conventional and privacy-preserving AI models in the detection of IV contrast on CT images. Specifically, we present models which are substantially fairer than a previously published baseline. For better comparability, we extend existing metrics to quantify the fairness of a model on a protected attribute in a single value. We provide a model fulfilling a strict Differential Privacy guarantee of (ε, δ) = (8, 2.8·10⁻³) which, with an accuracy of 97.42%, performs 5 percentage points better than the baseline. Additionally, while confirming prior work showing that strict privacy preservation increases discrimination against underrepresented subgroups, the proposed model is fairer than the baseline across all metrics when race and sex are considered as protected attributes, a result which extends to age under a more relaxed privacy guarantee.
KW - CT Contrast Detection
KW - Fairness
KW - Privacy
UR - http://www.scopus.com/inward/record.url?scp=85207654738&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-72787-0_4
DO - 10.1007/978-3-031-72787-0_4
M3 - Conference contribution
AN - SCOPUS:85207654738
SN - 9783031727863
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 34
EP - 45
BT - Ethics and Fairness in Medical Imaging - 2nd International Workshop on Fairness of AI in Medical Imaging, FAIMI 2024, and 3rd International Workshop on Ethical and Philosophical Issues in Medical Imaging, EPIMI 2024, Held in Conjunction with MICCAI 2024, Proceedings
A2 - Puyol-Antón, Esther
A2 - King, Andrew P.
A2 - Zamzmi, Ghada
A2 - Feragen, Aasa
A2 - Petersen, Eike
A2 - Cheplygina, Veronika
A2 - Ganz-Benjaminsen, Melanie
A2 - Ferrante, Enzo
A2 - Glocker, Ben
A2 - Rekik, Islem
A2 - Baxter, John S. H.
A2 - Eagleson, Roy
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd International Workshop on Fairness of AI in Medical Imaging, FAIMI 2024, and 3rd International Workshop on Ethical and Philosophical Issues in Medical Imaging, EPIMI 2024, Held in Conjunction with the International Conference on Medical Image Computing and Computer Assisted Interventions, MICCAI 2024
Y2 - 6 October 2024 through 10 October 2024
ER -