TY - GEN
T1 - Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images
AU - Sadafi, Ario
AU - Adonkina, Oleksandra
AU - Khakzar, Ashkan
AU - Lienemann, Peter
AU - Hehr, Rudolf Matthias
AU - Rueckert, Daniel
AU - Navab, Nassir
AU - Marr, Carsten
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability; however, for many clinical applications a deeper, pixel-level explanation is desirable but has been missing so far. In this work, we investigate the use of four attribution methods to explain multiple instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we can derive pixel-level explanations for the task of diagnosing blood cancer from patients’ blood smears. We study two datasets of acute myeloid leukemia with over 100 000 single cell images and observe how each attribution method performs on the multiple instance learning architecture, focusing on different properties of the single white blood cells. Additionally, we compare attribution maps with the annotations of a medical expert to see how the model’s decision-making differs from the human standard. Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.
AB - Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability; however, for many clinical applications a deeper, pixel-level explanation is desirable but has been missing so far. In this work, we investigate the use of four attribution methods to explain multiple instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we can derive pixel-level explanations for the task of diagnosing blood cancer from patients’ blood smears. We study two datasets of acute myeloid leukemia with over 100 000 single cell images and observe how each attribution method performs on the multiple instance learning architecture, focusing on different properties of the single white blood cells. Additionally, we compare attribution maps with the annotations of a medical expert to see how the model’s decision-making differs from the human standard. Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.
KW - Blood cancer cytology
KW - Multiple instance learning
KW - Pixel-level explainability
UR - http://www.scopus.com/inward/record.url?scp=85163954275&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-34048-2_14
DO - 10.1007/978-3-031-34048-2_14
M3 - Conference contribution
AN - SCOPUS:85163954275
SN - 9783031340475
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 170
EP - 182
BT - Information Processing in Medical Imaging - 28th International Conference, IPMI 2023, Proceedings
A2 - Frangi, Alejandro
A2 - de Bruijne, Marleen
A2 - Wassermann, Demian
A2 - Navab, Nassir
PB - Springer Science and Business Media Deutschland GmbH
T2 - 28th International Conference on Information Processing in Medical Imaging, IPMI 2023
Y2 - 18 June 2023 through 23 June 2023
ER -