TY - GEN
T1 - Interpretable Vertebral Fracture Diagnosis
AU - Engstler, Paul
AU - Keicher, Matthias
AU - Schinz, David
AU - Mach, Kristina
AU - Gersing, Alexandra S.
AU - Foreman, Sarah C.
AU - Goller, Sophia S.
AU - Weissinger, Juergen
AU - Rischewski, Jon
AU - Dietrich, Anna Sophia
AU - Wiestler, Benedikt
AU - Kirschke, Jan S.
AU - Khakzar, Ashkan
AU - Navab, Nassir
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Do black-box neural network models learn clinically relevant features for fracture diagnosis? The answer not only establishes reliability and quenches scientific curiosity, but also leads to explainable and verbose findings that can assist radiologists in the final diagnosis and increase trust. This work identifies the concepts networks use for vertebral fracture diagnosis in CT images. This is achieved by associating concepts with neurons highly correlated with a specific diagnosis in the dataset. The concepts are either associated with neurons by radiologists pre-hoc or are visualized during a specific prediction and left to the user’s interpretation. We evaluate which concepts lead to correct diagnoses and which lead to false positives. The proposed frameworks and analysis pave the way for reliable and explainable vertebral fracture diagnosis. The code is publicly available (https://github.com/CAMP-eXplain-AI/Interpretable-Vertebral-Fracture-Diagnosis).
AB - Do black-box neural network models learn clinically relevant features for fracture diagnosis? The answer not only establishes reliability and quenches scientific curiosity, but also leads to explainable and verbose findings that can assist radiologists in the final diagnosis and increase trust. This work identifies the concepts networks use for vertebral fracture diagnosis in CT images. This is achieved by associating concepts with neurons highly correlated with a specific diagnosis in the dataset. The concepts are either associated with neurons by radiologists pre-hoc or are visualized during a specific prediction and left to the user’s interpretation. We evaluate which concepts lead to correct diagnoses and which lead to false positives. The proposed frameworks and analysis pave the way for reliable and explainable vertebral fracture diagnosis. The code is publicly available (https://github.com/CAMP-eXplain-AI/Interpretable-Vertebral-Fracture-Diagnosis).
KW - Interpretability
KW - Vertebral fracture diagnosis
UR - http://www.scopus.com/inward/record.url?scp=85141753917&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-17976-1_7
DO - 10.1007/978-3-031-17976-1_7
M3 - Conference contribution
AN - SCOPUS:85141753917
SN - 9783031179754
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 71
EP - 81
BT - Interpretability of Machine Intelligence in Medical Image Computing - 5th International Workshop, iMIMIC 2022, Held in Conjunction with MICCAI 2022, Proceedings
A2 - Reyes, Mauricio
A2 - Henriques Abreu, Pedro
A2 - Cardoso, Jaime
PB - Springer Science and Business Media Deutschland GmbH
T2 - 5th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2022, held in conjunction with the 25th International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2022
Y2 - 22 September 2022 through 22 September 2022
ER -