Spatial relational reasoning in networks for improving semantic segmentation of aerial images

Lichao Mou, Yuansheng Hua, Xiao Xiang Zhu

Research output: Contribution to conference › Paper › peer-review

8 Scopus citations

Abstract

Most current semantic segmentation approaches rely on deep convolutional neural networks (CNNs). However, because convolution operates over local receptive fields, these networks struggle to model contextual spatial relations. Prior works have tried to address this issue with graphical models or spatial propagation modules in networks, but such models often fail to capture long-range spatial relationships between entities, which leads to spatially fragmented predictions. In this work, we introduce a simple yet effective network unit, the spatial relation module, to learn and reason about global relationships between any two spatial positions and then produce relation-enhanced feature representations. The spatial relation module is general and extensible, and can be used in a plug-and-play fashion within the existing fully convolutional network (FCN) framework. We evaluate networks equipped with the spatial relation module on semantic segmentation tasks using two aerial image datasets; they achieve very competitive results, bringing significant improvements over baselines.
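The abstract describes the spatial relation module only at a high level; the concrete architecture is defined in the paper itself. As a rough, hypothetical illustration of the general idea — scoring the affinity between every pair of spatial positions in a feature map and aggregating features accordingly, in the style of non-local attention — here is a minimal NumPy sketch. The function name, the dot-product affinity, the softmax normalization, and the residual connection are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def spatial_relation_module(feat):
    """Hypothetical sketch: given a C x H x W feature map, relate every
    spatial position to every other position and return a
    relation-enhanced feature map of the same shape."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                      # C x N, N = H*W positions
    affinity = x.T @ x                              # N x N pairwise relation scores
    # Softmax over each row so every position's relation weights sum to 1.
    affinity -= affinity.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(affinity)
    weights /= weights.sum(axis=1, keepdims=True)
    enhanced = x @ weights.T                        # aggregate globally related positions
    # Residual connection: add the relation-enhanced features back in.
    return feat + enhanced.reshape(C, H, W)
```

Because the module maps a feature tensor to another tensor of identical shape, it can be dropped between existing layers of an FCN-style network without changing downstream dimensions — which is what "plug-and-play" means here.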

Original language: English
Pages: 5232-5235
Number of pages: 4
DOIs
State: Published - 2019
Event: 39th IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2019 - Yokohama, Japan
Duration: 28 Jul 2019 – 2 Aug 2019

Conference

Conference: 39th IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2019
Country/Territory: Japan
City: Yokohama
Period: 28/07/19 – 2/08/19

Keywords

  • Aerial imagery
  • Fully convolutional network
  • Relation network
  • Semantic segmentation
