Deep semantic segmentation of aerial imagery based on multi-modal data

Kaiqiang Chen, Kun Fu, Xian Sun, Michael Weinmann, Stefan Hinz, Boris Jutzi, Martin Weinmann

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, we focus on the use of multi-modal data to achieve a semantic segmentation of aerial imagery. The multi-modal data comprise a true orthophoto, the Digital Surface Model (DSM), and further representations derived from these. Taking the data of different modalities separately and in combination as input to a Residual Shuffling Convolutional Neural Network (RSCNN), we analyze their value for the classification task defined by a benchmark dataset. The results reveal an improvement when different types of geometric features extracted from the DSM are used in addition to the true orthophoto.
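
The paper itself does not provide code; the following is a minimal, hypothetical PyTorch sketch of the general idea described in the abstract: geometric channels derived from the DSM (here, the DSM and a normalized DSM) are stacked with the orthophoto bands into one multi-channel input, and a small residual encoder followed by a sub-pixel (pixel-shuffle) upsampling layer produces per-pixel class scores. The class count, channel layout, layer widths and the ShufflingSegNet/ResidualBlock names are illustrative assumptions, not the authors' RSCNN architecture.

import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    # Two 3x3 convolutions with an identity skip connection.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)


class ShufflingSegNet(nn.Module):
    # Hypothetical sketch, not the authors' RSCNN: a strided-convolution
    # encoder with residual blocks, followed by a sub-pixel (PixelShuffle)
    # layer that restores full resolution for per-pixel class scores.
    def __init__(self, in_channels=5, num_classes=6, width=64, scale=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=3, stride=2, padding=1),  # 1/2 resolution
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, stride=2, padding=1),        # 1/4 resolution
            nn.ReLU(inplace=True),
            ResidualBlock(width),
            ResidualBlock(width),
        )
        # Predict scale**2 feature maps per class, then rearrange them
        # spatially to recover the input resolution.
        self.classifier = nn.Conv2d(width, num_classes * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.classifier(self.encoder(x)))


# Multi-modal input: 3 orthophoto bands stacked with a DSM and a normalized
# DSM channel (channel layout and class count are assumptions for illustration).
x = torch.randn(1, 5, 256, 256)
logits = ShufflingSegNet(in_channels=5, num_classes=6)(x)
print(logits.shape)  # torch.Size([1, 6, 256, 256])

The pixel-shuffle step rearranges the scale² feature maps predicted per class into a full-resolution label map, which is broadly the sub-pixel upsampling idea that shuffling CNNs build on; comparing models trained on the orthophoto alone versus the stacked multi-modal input mirrors the kind of modality comparison reported in the paper.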

Original language: English
Title of host publication: 2018 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6219-6222
Number of pages: 4
ISBN (Electronic): 9781538671504
DOIs
State: Published - 31 Oct 2018
Externally published: Yes
Event: 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018 - Valencia, Spain
Duration: 22 Jul 2018 - 27 Jul 2018

Publication series

Name: International Geoscience and Remote Sensing Symposium (IGARSS)
Volume: 2018-July

Conference

Conference: 38th Annual IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2018
Country/Territory: Spain
City: Valencia
Period: 22/07/18 - 27/07/18

Keywords

  • Aerial imagery
  • Deep learning
  • Multi-modal data
  • Semantic segmentation
  • Shuffling-CNN
