Abstract
Most current semantic segmentation approaches rely on deep convolutional neural networks (CNNs). However, because convolution operates over local receptive fields, these networks struggle to model contextual spatial relations. Prior works have attempted to address this issue by adding graphical models or spatial propagation modules to networks, but such models often fail to capture long-range spatial relationships between entities, which leads to spatially fragmented predictions. In this work, we introduce a simple yet effective network unit, the spatial relation module, to learn and reason about global relationships between any two spatial positions and produce relation-enhanced feature representations. The spatial relation module is general and extensible, and can be used in a plug-and-play fashion within the existing fully convolutional network (FCN) framework. We evaluate spatial relation module-equipped networks on semantic segmentation tasks using two aerial image datasets. The networks achieve very competitive results, bringing significant improvements over baselines.
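The abstract does not specify the module's exact formulation; the sketch below is only an illustration of one common way to realize a pairwise spatial relation unit (in the spirit of non-local/self-attention blocks), assuming 1×1 convolutional query/key/value embeddings, a softmax-normalized affinity between all position pairs, and residual fusion. All names and design choices here are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialRelationModule(nn.Module):
    """Hypothetical pairwise spatial relation unit (illustrative sketch).

    For every pair of spatial positions in the input feature map, an
    affinity score is computed from embedded features; each position then
    aggregates features from all positions, weighted by these scores, and
    the result is fused with the input to give relation-enhanced features.
    """

    def __init__(self, in_channels, embed_channels=None):
        super().__init__()
        embed_channels = embed_channels or in_channels // 2
        self.query = nn.Conv2d(in_channels, embed_channels, kernel_size=1)
        self.key = nn.Conv2d(in_channels, embed_channels, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Embed the input and flatten the spatial dimensions.
        q = self.query(x).flatten(2).transpose(1, 2)  # (N, HW, C')
        k = self.key(x).flatten(2)                    # (N, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)  # (N, HW, C)

        # Relation scores between all pairs of spatial positions.
        relation = F.softmax(torch.bmm(q, k), dim=-1)  # (N, HW, HW)

        # Aggregate features from all positions and reshape back to a map.
        out = torch.bmm(relation, v)                   # (N, HW, C)
        out = out.transpose(1, 2).reshape(n, c, h, w)

        # Residual fusion keeps the unit plug-and-play inside an FCN.
        return x + out


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)        # e.g. an FCN feature map
    module = SpatialRelationModule(in_channels=64)
    print(module(feats).shape)                # torch.Size([2, 64, 32, 32])
```

Because the output has the same shape as the input, such a unit can be dropped between any two stages of an FCN-style backbone without altering the rest of the architecture.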
| Original language | English |
|---|---|
| Pages | 5232-5235 |
| Number of pages | 4 |
| DOIs | |
| State | Published - 2019 |
| Event | 39th IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2019 - Yokohama, Japan. Duration: 28 Jul 2019 → 2 Aug 2019 |
Conference
| Conference | 39th IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2019 |
|---|---|
| Country/Territory | Japan |
| City | Yokohama |
| Period | 28/07/19 → 2/08/19 |
Keywords
- Aerial imagery
- Fully convolutional network
- Relation network
- Semantic segmentation