Multi-Label Guided Supervised Contrastive Learning for Earth Observation Pretraining

Yi Wang, Conrad M. Albrecht, Xiao Xiang Zhu

Research output: Contribution to conference › Paper › peer-review

Abstract

Pretraining foundation models on large-scale satellite imagery has attracted great interest in Earth observation. While most pretraining is conducted purely self-supervised, the many land-cover/land-use products that provide free and global annotations tend to be overlooked. To bridge this gap, we propose to exploit land-cover-derived multi-label annotations to guide supervised contrastive learning for Earth observation. We match the SSL4EO-S12 dataset with Dynamic World land cover maps and integrate image-level multi-label annotations. During pretraining, label similarities between different images are calculated, and images with high similarity scores are pulled together in the embedding space. Experimental results on classification and segmentation downstream tasks demonstrate the effectiveness of the proposed method.
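The core idea in the abstract (compute label similarities between images, then pull high-similarity pairs together in the embedding space) can be sketched as a multi-label variant of a supervised contrastive loss. The sketch below is an illustrative assumption, not the paper's exact formulation: it uses Jaccard similarity between binary multi-label vectors and a hypothetical threshold `sim_threshold` to decide which pairs count as positives.

```python
import numpy as np

def multilabel_supcon_loss(embeddings, labels, sim_threshold=0.5, temperature=0.1):
    """Sketch of a multi-label guided supervised contrastive loss.

    Pairs whose multi-label Jaccard similarity exceeds `sim_threshold`
    are treated as positives and pulled together in the embedding space.
    Names and thresholds here are illustrative assumptions.

    embeddings: (N, D) float array of image embeddings.
    labels:     (N, C) binary multi-label matrix (e.g. land cover classes).
    """
    # L2-normalize so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    logits = z @ z.T / temperature                        # (N, N)

    # Jaccard similarity between binary multi-label vectors.
    inter = labels @ labels.T
    union = labels.sum(1, keepdims=True) + labels.sum(1) - inter
    label_sim = inter / np.maximum(union, 1)

    n = len(z)
    eye = np.eye(n, dtype=bool)
    pos_mask = (label_sim > sim_threshold) & ~eye         # positives per anchor

    # Log-softmax over all other samples (exclude self via -inf logit).
    logits = np.where(eye, -np.inf, logits)
    log_prob = logits - np.log(np.exp(logits).sum(1, keepdims=True))

    # Mean negative log-probability of positives, over anchors that have any.
    pos_logprob = np.where(pos_mask, log_prob, 0.0).sum(1)
    has_pos = pos_mask.sum(1) > 0
    loss = -pos_logprob[has_pos] / pos_mask.sum(1)[has_pos]
    return loss.mean()
```

In this sketch, anchors with no sufficiently similar partner in the batch are simply skipped; how the actual method weights or thresholds the label similarities is detailed in the paper itself.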

Original language: English
Pages: 7568-7571
Number of pages: 4
DOIs
State: Published - 2024
Event: 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024 - Athens, Greece
Duration: 7 Jul 2024 – 12 Jul 2024

Conference

Conference: 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024
Country/Territory: Greece
City: Athens
Period: 7/07/24 – 12/07/24

Keywords

  • Earth observation
  • foundation models
  • pretraining
  • remote sensing
  • self-supervised learning
