Explaining the Effects of Clouds on Remote Sensing Scene Classification

Jakob Gawlikowski, Patrick Ebel, Michael Schmitt, Xiao Xiang Zhu

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

Most of the Earth's surface is covered by haze or clouds, impeding the constant monitoring of our planet. Preceding works have documented the detrimental effects of cloud coverage on remote sensing applications and proposed ways to approach this issue. However, up to now, little effort has been spent on understanding how exactly atmospheric disturbances impede the application of modern machine learning methods to Earth observation data. Specifically, we consider the effects of haze and cloud coverage on a scene classification task. We provide a thorough investigation of how classifiers trained on cloud-free data fail once they encounter noisy imagery, a common scenario encountered when deploying pretrained models for remote sensing in real use cases. We show how and why remote sensing scene classification suffers from cloud coverage. Based on a multistage analysis, including explainability approaches applied to the predictions, we work out four different types of effects that clouds have on scene prediction. The contribution of our work is to deepen the understanding of the effects of clouds on common remote sensing applications and consequently to guide the development of more robust methods.

Original language: English
Pages (from-to): 9976-9986
Number of pages: 11
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Volume: 15
DOIs
State: Published - 2022

Keywords

  • Classification
  • clouds
  • deep learning
  • explainability
  • remote sensing
  • robustness
