Learning-Shared Cross-Modality Representation Using Multispectral-LiDAR and Hyperspectral Data

Danfeng Hong, Jocelyn Chanussot, Naoto Yokoya, Jian Kang, Xiao Xiang Zhu

Research output: Contribution to journal › Article › peer-review

Abstract

Due to the ever-growing diversity of data sources, multimodality feature learning has attracted increasing attention. However, most existing methods jointly learn feature representations from multiple modalities that are present in both the training and test sets, and the case where a certain modality is absent in the test phase has been less investigated. To this end, in this letter, we propose to learn a shared feature space across multiple modalities in the training process. In this way, out-of-sample data from any of the modalities can be directly projected onto the learned space for a more effective cross-modality representation. More significantly, the shared space is regarded as a latent subspace in our proposed method, which connects the original multimodal samples with label information to further improve feature discrimination. Experiments are conducted on the multispectral-light detection and ranging (LiDAR) and hyperspectral data set provided by the 2018 IEEE GRSS Data Fusion Contest to demonstrate the effectiveness and superiority of the proposed method in comparison with several popular baselines.
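
The abstract only describes the approach at a high level. As an illustration of the general idea (a shared latent subspace coupled to the labels, onto which a single modality can be projected at test time), the following Python sketch is provided. The objective, the alternating least-squares updates, and all names (fit_shared_subspace, predict_single_modality, alpha, beta, k) are assumptions introduced here for illustration and are not taken from the letter itself.

    # Minimal sketch of shared-subspace cross-modality learning (hypothetical
    # formulation, NOT the exact algorithm of the letter): two modalities are
    # projected into a common latent subspace that is also coupled to the
    # labels, so test samples from either single modality can be classified.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_shared_subspace(X1, X2, Y, k=10, alpha=1.0, beta=0.1, iters=50):
        """Alternating least squares for an illustrative objective:
        ||Y - P A||^2 + alpha (||A - T1 X1||^2 + ||A - T2 X2||^2) + beta * regularizers.
        X1: (d1, n), X2: (d2, n), Y: (c, n) one-hot labels."""
        d1, n = X1.shape
        d2 = X2.shape[0]
        c = Y.shape[0]
        A = rng.standard_normal((k, n)) * 0.01      # shared latent representation
        P = rng.standard_normal((c, k)) * 0.01      # subspace -> label map
        for _ in range(iters):
            # modality-specific projections onto the shared subspace
            T1 = A @ X1.T @ np.linalg.inv(X1 @ X1.T + (beta / alpha) * np.eye(d1))
            T2 = A @ X2.T @ np.linalg.inv(X2 @ X2.T + (beta / alpha) * np.eye(d2))
            # label predictor defined on the shared subspace
            P = Y @ A.T @ np.linalg.inv(A @ A.T + beta * np.eye(k))
            # shared representation coupled to both modalities and the labels
            A = np.linalg.solve(P.T @ P + 2 * alpha * np.eye(k),
                                P.T @ Y + alpha * (T1 @ X1 + T2 @ X2))
        return P, T1, T2

    def predict_single_modality(P, T, X):
        """Classify samples observed in only one modality by projecting them
        onto the learned shared subspace."""
        return np.argmax(P @ (T @ X), axis=0)

    # Toy demo with random data standing in for MS-LiDAR / hyperspectral features.
    n, d1, d2, c = 200, 20, 50, 4
    labels = rng.integers(0, c, n)
    Y = np.eye(c)[labels].T
    X1 = rng.standard_normal((d1, n)) + labels      # modality 1 (e.g., MS-LiDAR)
    X2 = rng.standard_normal((d2, n)) + labels      # modality 2 (e.g., hyperspectral)
    P, T1, T2 = fit_shared_subspace(X1, X2, Y)
    print("train acc, modality 1 only:", np.mean(predict_single_modality(P, T1, X1) == labels))
    print("train acc, modality 2 only:", np.mean(predict_single_modality(P, T2, X2) == labels))

At test time only one of the learned projections (T1 or T2) is needed, which mirrors the setting described in the abstract where a modality may be missing in the test phase.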

Original language: English
Article number: 8976086
Pages (from-to): 1470-1474
Number of pages: 5
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 17
Issue number: 8
DOIs
State: Published - Aug 2020
Externally published: Yes

Keywords

  • Cross-modality
  • feature learning
  • hyperspectral
  • multimodality
  • multispectral-Light Detection and Ranging (LIDAR)
  • shared subspace learning
