Jungle-net: Using explainable machine learning to gain new insights into the appearance of wilderness in satellite imagery

T. Stomberg, I. Weber, M. Schmitt, R. Roscher

Research output: Contribution to journal › Conference article › peer-review


Abstract

Explainable machine learning has recently gained attention due to its contribution to understanding how a model works and why certain decisions are made. A goal that has so far received less attention, especially in remote sensing, is the derivation of new knowledge and scientific insights from observational data. In our paper, we propose an explainable machine learning approach to address the challenge that certain land cover classes such as wilderness are not well-defined in satellite imagery and can only be mapped with vague labels. Our approach consists of a combined U-Net and ResNet-18 that performs scene classification while simultaneously providing interpretable information from which we can derive new insights about classes. We show that our methodology allows us to deepen our understanding of what makes nature wild by automatically identifying simple concepts, such as wasteland, that semantically describe wilderness. It further quantifies a class's sensitivity with respect to a concept and uses it as an indicator of how well that concept describes the class.

Original language: English
Pages (from-to): 317-324
Number of pages: 8
Journal: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume: 5
Issue number: 3
DOIs
State: Published - 17 Jun 2021
Externally published: Yes
Event: 24th ISPRS Congress on Imaging today, foreseeing tomorrow, Commission III - Nice, France
Duration: 5 Jul 2021 – 9 Jul 2021

Keywords

  • Deep neural networks
  • Explainability
  • Interpretability
  • Scene classification

