TY - GEN
T1 - Self-Supervised Vision Transformers for Joint SAR-Optical Representation Learning
AU - Wang, Yi
AU - Albrecht, Conrad M.
AU - Zhu, Xiao Xiang
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Self-supervised learning (SSL) has attracted much interest in remote sensing and Earth observation due to its ability to learn task-agnostic representations without human annotation. While most existing SSL works in remote sensing utilize ConvNet backbones and focus on a single modality, we explore the potential of vision transformers (ViTs) for joint SAR-optical representation learning. Based on DINO, a state-of-the-art SSL algorithm that distills knowledge from two augmented views of an input image, we combine SAR and optical imagery by concatenating all channels into a unified input. Subsequently, we randomly mask out the channels of one modality as a data augmentation strategy. During training, the model is fed optical-only, SAR-only, and SAR-optical image pairs, learning both intra- and inter-modality representations. Experimental results on the BigEarthNet-MM dataset demonstrate the benefits of both the ViT backbones and the proposed multimodal SSL algorithm, DINO-MM.
KW - Self-supervised learning
KW - multimodal representation learning
KW - vision transformer
UR - http://www.scopus.com/inward/record.url?scp=85140402969&partnerID=8YFLogxK
U2 - 10.1109/IGARSS46834.2022.9883983
DO - 10.1109/IGARSS46834.2022.9883983
M3 - Conference contribution
AN - SCOPUS:85140402969
T3 - International Geoscience and Remote Sensing Symposium (IGARSS)
SP - 139
EP - 142
BT - IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2022
Y2 - 17 July 2022 through 22 July 2022
ER -