Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features

Ashkan Khakzar, Yang Zhang, Wejdene Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays. To establish trust in the clinical routine, the networks’ prediction mechanism needs to be interpretable. One principal approach to interpretation is feature attribution. Feature attribution methods identify the importance of input features for the output prediction. Building on the Information Bottleneck Attribution (IBA) method, for each prediction we identify the chest X-ray regions that have high mutual information with the network’s output. The original IBA identifies input regions that have sufficient predictive information. We propose Inverse IBA to identify all informative regions; thus all predictive cues for pathologies are highlighted on the X-rays, a desirable property for chest X-ray diagnosis. Moreover, we propose Regression IBA for explaining regression models. Using Regression IBA, we observe that a model trained on cumulative severity score labels implicitly learns the severity of different X-ray regions. Finally, we propose Multi-layer IBA to generate higher-resolution and more detailed attribution/saliency maps. We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics and human-agnostic feature importance metrics on the NIH Chest X-ray8 and BrixIA datasets. The code (https://github.com/CAMP-eXplain-AI/CheXplain-IBA) is publicly available.
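The bottleneck mechanism the abstract builds on can be illustrated with a rough sketch. This is not the authors' implementation (see the linked repository for that); it is a minimal, hypothetical rendering of the general IBA idea: a per-feature mask blends an intermediate feature map with Gaussian noise, and the mask is optimized to preserve the target prediction while a KL term penalizes the information the mask lets through. The function `iba_attribution`, the `beta` weight, and all hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def info_loss(lam, r, mu, std):
    # KL divergence between the masked-feature distribution and pure noise
    # N(mu, std^2); it approximates the information retained by mask lam.
    r_norm = (r - mu) / std
    var_term = (1.0 - lam) ** 2
    mean_term = (lam * r_norm) ** 2
    return (-torch.log(1.0 - lam + 1e-8)
            + 0.5 * (var_term + mean_term) - 0.5).mean()


def iba_attribution(features, classifier_head, target, steps=50, beta=10.0):
    """Sketch of an IBA-style attribution over an intermediate feature map.

    features: intermediate activations R of shape (1, C, H, W)
    classifier_head: module mapping the feature map to class logits
    target: tensor with the target class index, shape (1,)
    """
    # Noise statistics estimated per channel from the feature map itself.
    mu = features.mean(dim=(0, 2, 3), keepdim=True)
    std = features.std(dim=(0, 2, 3), keepdim=True) + 1e-6

    alpha = torch.zeros_like(features, requires_grad=True)  # pre-sigmoid mask
    opt = torch.optim.Adam([alpha], lr=0.1)

    for _ in range(steps):
        lam = torch.sigmoid(alpha)                    # mask in (0, 1)
        eps = mu + std * torch.randn_like(features)   # replacement noise
        z = lam * features + (1.0 - lam) * eps        # bottlenecked features
        ce = F.cross_entropy(classifier_head(z), target)
        loss = ce + beta * info_loss(lam, features, mu, std)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Regions with mask values near 1 carry high mutual information
    # with the network's output for this prediction.
    return torch.sigmoid(alpha).detach()
```

In this reading, Inverse IBA would flip the objective so that *removing* a region must hurt the prediction, marking every informative region rather than only a sufficient subset; that variant is not sketched here.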

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 - 24th International Conference, Proceedings
Editors: Marleen de Bruijne, Philippe C. Cattin, Stéphane Cotin, Nicolas Padoy, Stefanie Speidel, Yefeng Zheng, Caroline Essert
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 391-401
Number of pages: 11
ISBN (Print): 9783030871987
State: Published - 2021
Event: 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021 - Virtual, Online
Duration: 27 Sep 2021 – 1 Oct 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12903 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021
City: Virtual, Online
Period: 27/09/21 – 1/10/21

Keywords

  • Chest X-rays
  • Covid
  • Explainable AI
  • Feature attribution
