TY - GEN
T1 - Evaluation of Randomized Input Sampling for Explanation (RISE) for 3D XAI - Proof of Concept for Black-Box Brain-Hemorrhage Classification
AU - Highton, Jack
AU - Chong, Quok Zong
AU - Crawley, Richard
AU - Schnabel, Julia A.
AU - Bhatia, Kanwal K.
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
AB - An increasing number of AI products for medical imaging are offered to healthcare organizations, but these are frequently regarded as ‘black boxes’ that offer only limited insight into the underlying model’s functionality. Model-agnostic methods are therefore required to provide Explainable AI (XAI), improve clinicians’ trust, and thus accelerate adoption. However, there is currently a lack of published methods, with systematic evaluation, for explaining 3D classification models in medical imaging applications. Here, the popular explainability method RISE is modified so that, for the first time to the best of our knowledge, it can be applied to 3D medical image classification. The method was assessed using recently proposed guidelines for clinical explainable AI. When different parameters were tested using a 3D CT dataset and a classifier detecting the presence of brain hemorrhage, we found that combining different algorithms to produce the 3D occlusion patterns led to better and more reliable explainability results. This was confirmed both by quantitative metrics and by a clinical expert’s interpretability assessment of the 3D saliency heatmaps.
KW - Explainable AI
KW - Hemorrhage Classification
KW - RISE
UR - http://www.scopus.com/inward/record.url?scp=85188712512&partnerID=8YFLogxK
DO - 10.1007/978-981-97-1335-6_4
M3 - Conference contribution
AN - SCOPUS:85188712512
SN - 9789819713349
T3 - Lecture Notes in Electrical Engineering
SP - 41
EP - 51
BT - Proceedings of 2023 International Conference on Medical Imaging and Computer-Aided Diagnosis (MICAD 2023) - Medical Imaging and Computer-Aided Diagnosis
A2 - Su, Ruidan
A2 - Zhang, Yu-Dong
A2 - Frangi, Alejandro F.
PB - Springer Science and Business Media Deutschland GmbH
T2 - International Conference on Medical Imaging and Computer-Aided Diagnosis, MICAD 2023
Y2 - 9 December 2023 through 10 December 2023
ER -
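
For context, RISE estimates a saliency map by occluding the input with many random, smoothly upsampled binary masks and weighting each mask by the classifier's output score on the occluded input. A minimal sketch of how that recipe extends from 2D images to 3D volumes is given below; the model handle model_fn, the grid size, the keep probability, and the mask count are illustrative assumptions, not the configuration reported in the paper.

    import numpy as np
    from scipy.ndimage import zoom

    def generate_3d_masks(n_masks, input_shape, grid=8, p_keep=0.5, rng=None):
        """Sample coarse binary grids, upsample them smoothly (order-1, i.e.
        trilinear, interpolation), and crop at a random shift -- the RISE
        mask construction carried over to 3D."""
        rng = rng or np.random.default_rng(0)
        D, H, W = input_shape
        # Cell size of one coarse-grid block in voxels, per axis.
        cell = tuple(int(np.ceil(s / grid)) for s in (D, H, W))
        # Upsample to one cell larger than the volume so a random
        # crop can start anywhere within the first cell.
        up = tuple((grid + 1) * c for c in cell)
        masks = np.empty((n_masks,) + input_shape, dtype=np.float32)
        for i in range(n_masks):
            coarse = (rng.random((grid, grid, grid)) < p_keep).astype(np.float32)
            smooth = zoom(coarse, (up[0] / grid, up[1] / grid, up[2] / grid), order=1)
            dz, dy, dx = (rng.integers(0, c) for c in cell)
            masks[i] = smooth[dz:dz + D, dy:dy + H, dx:dx + W]
        return masks

    def rise_saliency(model_fn, volume, masks):
        """Weight each mask by the classifier's score on the masked volume
        and average, normalizing by the expected voxel coverage.
        model_fn is an assumed black-box handle: volume -> scalar score."""
        scores = np.array([model_fn(volume * m) for m in masks])   # shape (N,)
        saliency = np.tensordot(scores, masks, axes=1)             # shape (D, H, W)
        return saliency / (masks.mean() * len(masks))

The random shift before cropping is what lets the upsampled masks cover voxel positions densely, mirroring the original 2D RISE construction; the paper's contribution of combining different algorithms for the 3D occlusion patterns would replace the single coarse-grid sampler sketched here.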