TY - JOUR
T1 - NEMt
T2 - 6th Northern Lights Deep Learning Conference, NLDL 2025
AU - Møller, Bjørn
AU - Amiri, Sepideh
AU - Igel, Christian
AU - Wickstrøm, Kristoffer Knutsen
AU - Jenssen, Robert
AU - Keicher, Matthias
AU - Azampour, Mohammad Farid
AU - Navab, Nassir
AU - Ibragimov, Bulat
N1 - Publisher Copyright:
© NLDL 2025. All rights reserved.
PY - 2025
Y1 - 2025
N2 - A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed. NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification. In this work, we address this issue by introducing a loss function for training explanation modules that incorporates labels. It steers explanations toward target labels and includes an integrated smoothing operator that reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM T.
UR - http://www.scopus.com/inward/record.url?scp=85219163645&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85219163645
SN - 2640-3498
VL - 265
SP - 184
EP - 192
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 7 January 2025 through 9 January 2025
ER -