Beyond pixels: Learning from multimodal hyperspectral superpixels for land cover classification

Danfeng Hong, Xin Wu, Jing Yao, Xiao Xiang Zhu

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Although numerous advanced classification models have recently been developed for land cover mapping, relying on a single remote sensing (RS) data source, such as hyperspectral (HS) data alone or multispectral (MS) data alone, limits further improvements in classification accuracy and tends to hit a performance bottleneck. For this reason, we develop a novel superpixel-based subspace learning model, called Supace, which jointly learns multimodal feature representations from HS and MS superpixels to achieve more accurate land cover classification (LCC) results. Supace learns a common subspace across multimodal RS data in which the diverse and complementary information from different modalities can be better combined, thereby enhancing the discriminative ability of the learned features more effectively. To better capture the semantic information of objects during feature learning, superpixels, rather than individual pixels, are taken as the unit of analysis in Supace for LCC. Extensive experiments conducted on two popular hyperspectral and multispectral datasets demonstrate the superiority of the proposed Supace in the land cover classification task over several well-known baselines for multimodal remote sensing image feature learning.

Original language: English
Pages (from-to): 802-808
Number of pages: 7
Journal: Science China Technological Sciences
Volume: 65
Issue number: 4
DOIs
State: Published - Apr 2022

Keywords

  • classification
  • hyperspectral image
  • land cover
  • multimodal
  • multispectral image
  • remote sensing
  • subspace learning
  • superpixels
