Abstract
Pretraining foundation models on large-scale satellite imagery has attracted great interest in Earth observation. While most pretraining is conducted in a purely self-supervised manner, the many land-cover and land-use products that provide free, global annotations tend to be overlooked. To bridge this gap, we propose to exploit land-cover-generated multi-label annotations to guide supervised contrastive learning for Earth observation. We match the SSL4EO-S12 dataset with Dynamic World land cover maps and integrate image-level multi-label annotations. During pretraining, label similarities between different images are computed, and images with high similarity scores are pulled together in the embedding space. Experimental results on classification and segmentation downstream tasks demonstrate the effectiveness of the proposed method.
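The core idea sketched in the abstract (compute pairwise label similarity within a batch and pull high-similarity pairs together) maps naturally onto a supervised contrastive loss. Below is a minimal PyTorch sketch of such a label-guided objective, assuming multi-hot land-cover labels per image; the Jaccard similarity measure, the `sim_threshold` cutoff, and all function names are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def label_similarity(labels: torch.Tensor) -> torch.Tensor:
    """Pairwise Jaccard similarity between multi-hot label vectors of shape (B, C).

    Assumption: the paper computes some label-overlap score; Jaccard is one
    plausible choice, not necessarily the authors' exact measure.
    """
    labels = labels.float()
    inter = labels @ labels.T                                     # |A ∩ B|
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter   # |A ∪ B|
    return inter / union.clamp(min=1e-8)


def label_guided_contrastive_loss(z: torch.Tensor,
                                  labels: torch.Tensor,
                                  temperature: float = 0.1,
                                  sim_threshold: float = 0.5) -> torch.Tensor:
    """Supervised contrastive loss in which images whose label similarity
    exceeds `sim_threshold` are treated as positives for each other.

    z: (B, D) image embeddings; labels: (B, C) multi-hot land-cover labels.
    """
    z = F.normalize(z, dim=1)
    logits = (z @ z.T) / temperature                              # scaled cosine similarities
    B = z.size(0)
    self_mask = torch.eye(B, dtype=torch.bool, device=z.device)

    # Positives: pairs with high label similarity (excluding self-pairs).
    pos_mask = (label_similarity(labels) >= sim_threshold) & ~self_mask

    # Log-softmax over all other samples in the batch.
    logits = logits.masked_fill(self_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Average log-likelihood of the positives, for anchors that have any.
    pos_counts = pos_mask.sum(1)
    has_pos = pos_counts > 0
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    loss = -pos_log_prob[has_pos] / pos_counts[has_pos]
    return loss.mean()
```

In this sketch the threshold controls how aggressively images with overlapping land-cover classes are pulled together; a soft weighting by the similarity score itself would be an alternative to the hard cutoff.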
| Original language | English |
| --- | --- |
| Pages | 7568-7571 |
| Number of pages | 4 |
| DOIs | |
| State | Published - 2024 |
| Event | 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024 - Athens, Greece. Duration: 7 Jul 2024 → 12 Jul 2024 |
Conference

| Conference | 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024 |
| --- | --- |
| Country/Territory | Greece |
| City | Athens |
| Period | 7/07/24 → 12/07/24 |
Keywords
- Earth observation
- foundation models
- pretraining
- remote sensing
- self-supervised learning