Multi-modal deep learning for landform recognition

Lin Du, Xiong You, Ke Li, Liqiu Meng, Gong Cheng, Liyang Xiong, Guangxia Wang

Research output: Contribution to journal › Article › peer-review

68 Scopus citations

Abstract

Automatic landform recognition is one of the most important tools for landform classification and for deepening our understanding of terrain morphology. This paper presents a multi-modal geomorphological data fusion framework that uses deep learning-based methods to improve the performance of landform recognition. A multi-channel geomorphological feature extraction network generates distinct characteristics from multi-modal geomorphological data such as shaded relief, DEM, and slope, and a multi-modal geomorphological feature fusion network then harvests joint features to represent landforms effectively. A residual learning unit mines deep correlations between visual and physical modality features to obtain the final landform representations. Finally, three fully-connected layers and a softmax classifier generate a label for each sample. Experimental results indicate that this multi-modal data fusion-based algorithm performs considerably better than conventional algorithms, reaching a highest recognition rate of 90.28% and showing great potential for landform recognition.
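The abstract outlines a pipeline of per-modality feature extraction, feature fusion with a residual learning unit, and a three-layer fully-connected classifier with softmax. The sketch below illustrates that general idea in PyTorch; the layer sizes, the number of landform classes, and the class/function names (ModalityBranch, MultiModalLandformNet) are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the multi-modal fusion idea described in the abstract:
# one CNN branch per modality (e.g. shaded relief, DEM, slope), concatenation,
# a residual unit over the fused features, then three FC layers + softmax.
# All dimensions and class counts are assumptions for illustration only.
import torch
import torch.nn as nn


class ModalityBranch(nn.Module):
    """Small CNN extracting a feature vector from one single-band modality."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))


class MultiModalLandformNet(nn.Module):
    """Fuse per-modality features, refine them with a residual unit,
    and classify with three fully-connected layers and softmax."""
    def __init__(self, num_modalities=3, feat_dim=128, num_classes=7):
        super().__init__()
        self.branches = nn.ModuleList(
            [ModalityBranch(feat_dim) for _ in range(num_modalities)]
        )
        fused = num_modalities * feat_dim
        # Residual learning unit over the fused feature vector
        self.res_block = nn.Sequential(
            nn.Linear(fused, fused), nn.ReLU(),
            nn.Linear(fused, fused),
        )
        self.classifier = nn.Sequential(
            nn.Linear(fused, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, modalities):
        # modalities: list of (B, 1, H, W) tensors, e.g. [shaded_relief, dem, slope]
        feats = torch.cat([b(x) for b, x in zip(self.branches, modalities)], dim=1)
        feats = feats + self.res_block(feats)        # residual fusion
        return torch.softmax(self.classifier(feats), dim=1)


# Usage sketch: three 64x64 single-band inputs, batch of 2
if __name__ == "__main__":
    net = MultiModalLandformNet()
    x = [torch.randn(2, 1, 64, 64) for _ in range(3)]
    print(net(x).shape)  # torch.Size([2, 7])
```

In practice the softmax would normally be folded into a cross-entropy loss during training; it is kept explicit here only to mirror the classifier described in the abstract.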

Original language: English
Pages (from-to): 63-75
Number of pages: 13
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 158
DOIs
State: Published - Dec 2019

Keywords

  • Convolutional neural networks (CNN)
  • Deep learning
  • Landform recognition
  • Multi-modal geomorphological data fusion
