Self-supervised audiovisual representation learning for remote sensing data

Konrad Heidler, Lichao Mou, Di Hu, Pu Jin, Guangyao Li, Chuang Gan, Ji Rong Wen, Xiao Xiang Zhu

Research output: Contribution to journal › Review article › peer-review

32 Scopus citations

Abstract

Many deep learning approaches make extensive use of backbone networks pretrained on large datasets like ImageNet, which are then fine-tuned. In remote sensing, the lack of comparably large annotated datasets and the diversity of sensing platforms impede similar developments. To contribute towards the availability of pretrained backbone networks in remote sensing, we devise a self-supervised approach for pretraining deep neural networks. By exploiting the correspondence between co-located imagery and audio recordings, this is done entirely label-free, without the need for manual annotation. For this purpose, we introduce the SoundingEarth dataset, which consists of co-located aerial imagery and crowd-sourced audio samples from around the world. Using this dataset, we then pretrain ResNet models to map samples from both modalities into a common embedding space, encouraging the models to understand key properties of a scene that influence both its visual and auditory appearance. To validate the usefulness of the proposed approach, we evaluate the transfer learning performance of the resulting pretrained weights against weights obtained through other means. By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pretraining strategies for remote sensing imagery. The dataset, code and pretrained model weights are available at https://github.com/khdlr/SoundingEarth.
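The idea of mapping co-located image and audio samples into a common embedding space is typically trained with a contrastive objective that pulls matching pairs together and pushes non-matching pairs apart. The sketch below illustrates one common choice, a symmetric InfoNCE-style loss over a batch of paired embeddings; the function name, temperature value, and loss details are illustrative assumptions, not necessarily the exact objective used in the paper.

```python
import numpy as np

def info_nce_loss(img_emb, aud_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss between paired image and
    audio embeddings of shape (N, D). Row i of each matrix comes from the
    same location, so the i-th image/audio pair is the positive; all other
    rows in the batch act as negatives. (Illustrative sketch only.)"""
    # L2-normalise so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    aud = aud_emb / np.linalg.norm(aud_emb, axis=1, keepdims=True)
    logits = img @ aud.T / temperature       # (N, N); positives on diagonal
    idx = np.arange(len(logits))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()    # pick the diagonal positives

    # average of image->audio and audio->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimising this loss encourages embeddings of co-located imagery and audio to be more similar to each other than to embeddings of any other location in the batch, which is what makes the learned visual backbone transferable to downstream remote sensing tasks.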

Original language: English
Article number: 103130
Journal: International Journal of Applied Earth Observation and Geoinformation
Volume: 116
State: Published - Feb 2023

Keywords

  • Audiovisual dataset
  • Multi-modal learning
  • Representation learning
  • Self-supervised learning

