TY - JOUR
T1 - Self-supervised audiovisual representation learning for remote sensing data
AU - Heidler, Konrad
AU - Mou, Lichao
AU - Hu, Di
AU - Jin, Pu
AU - Li, Guangyao
AU - Gan, Chuang
AU - Wen, Ji Rong
AU - Zhu, Xiao Xiang
N1 - Publisher Copyright:
© 2022 The Authors
PY - 2023/2
Y1 - 2023/2
AB - Many deep learning approaches make extensive use of backbone networks pretrained on large datasets like ImageNet, which are then fine-tuned. In remote sensing, the lack of comparably large annotated datasets and the diversity of sensing platforms impede similar developments. In order to contribute towards the availability of pretrained backbone networks in remote sensing, we devise a self-supervised approach for pretraining deep neural networks. By exploiting the correspondence between co-located imagery and audio recordings, this is done entirely label-free, without the need for manual annotation. For this purpose, we introduce the SoundingEarth dataset, which consists of co-located aerial imagery and crowd-sourced audio samples from all around the world. Using this dataset, we pretrain ResNet models to map samples from both modalities into a common embedding space, encouraging the models to understand key properties of a scene that influence both its visual and auditory appearance. To validate the usefulness of the proposed approach, we compare the transfer learning performance of the resulting pretrained weights against weights obtained through other means. By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pretraining strategies for remote sensing imagery. The dataset, code, and pretrained model weights are available at https://github.com/khdlr/SoundingEarth.
KW - Audiovisual dataset
KW - Multi-modal learning
KW - Representation learning
KW - Self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85143874315&partnerID=8YFLogxK
U2 - 10.1016/j.jag.2022.103130
DO - 10.1016/j.jag.2022.103130
M3 - Article
AN - SCOPUS:85143874315
SN - 1569-8432
VL - 116
JO - International Journal of Applied Earth Observation and Geoinformation
JF - International Journal of Applied Earth Observation and Geoinformation
M1 - 103130
ER -
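
The abstract above describes contrastive audiovisual pretraining: an image encoder and an audio encoder are trained so that co-located aerial images and audio recordings land close together in a shared embedding space. As a rough illustration only (not the authors' released implementation; the ResNet-18 backbones, 128-dimensional projection, log-mel spectrogram input, and temperature of 0.07 below are all assumptions), a symmetric InfoNCE-style objective over a batch of co-located pairs could look like this in PyTorch:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class AudioVisualEmbedder(nn.Module):
    """Two encoders projecting images and audio spectrograms into one embedding space."""

    def __init__(self, dim=128):
        super().__init__()
        # Image branch: ResNet backbone with the classifier replaced by a projection layer.
        self.image_encoder = resnet18(weights=None)
        self.image_encoder.fc = nn.Linear(self.image_encoder.fc.in_features, dim)
        # Audio branch: ResNet over single-channel (log-mel spectrogram) input.
        self.audio_encoder = resnet18(weights=None)
        self.audio_encoder.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.audio_encoder.fc = nn.Linear(self.audio_encoder.fc.in_features, dim)

    def forward(self, images, spectrograms):
        # L2-normalise so that dot products are cosine similarities.
        z_img = F.normalize(self.image_encoder(images), dim=1)
        z_aud = F.normalize(self.audio_encoder(spectrograms), dim=1)
        return z_img, z_aud


def symmetric_info_nce(z_img, z_aud, temperature=0.07):
    # Similarity logits between every image and every audio clip in the batch;
    # co-located pairs lie on the diagonal and serve as the positive targets.
    logits = z_img @ z_aud.t() / temperature
    targets = torch.arange(z_img.size(0), device=z_img.device)
    # Average the image-to-audio and audio-to-image retrieval losses.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Toy usage with random tensors standing in for one co-located SoundingEarth batch.
model = AudioVisualEmbedder()
images = torch.randn(4, 3, 224, 224)        # aerial image patches
spectrograms = torch.randn(4, 1, 128, 128)  # log-mel spectrograms of audio clips
loss = symmetric_info_nce(*model(images, spectrograms))
loss.backward()

After pretraining on such pairs, the image encoder alone would be kept and fine-tuned on downstream remote sensing datasets, which is the transfer learning setting the abstract evaluates.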