A CNN for the identification of corresponding patches in SAR and optical imagery of urban scenes

Lichao Mou, Michael Schmitt, Yuanyuan Wang, Xiao Xiang Zhu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

54 Scopus citations

Abstract

In this paper we propose a convolutional neural network (CNN) that identifies corresponding patches of very high resolution (VHR) optical and SAR imagery of complex urban scenes. Instead of the siamese architecture conventionally used in CNNs designed for image matching, we resort to a pseudo-siamese configuration with no interconnection between the two streams for SAR and optical imagery. The network is trained with automatically generated training data and does not rely on any hand-crafted features. First evaluations show that the network is able to predict corresponding patches with high accuracy, indicating great potential for further development into a generalized multi-sensor matching procedure.
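As a rough illustration of the pseudo-siamese idea described in the abstract, the following PyTorch sketch builds two convolutional streams with identical layout but independent weights, one for SAR and one for optical patches, and fuses their features only before the final decision layers. The layer sizes, patch size, and channel counts are illustrative assumptions; the abstract does not specify the configuration used in the paper.

```python
import torch
import torch.nn as nn


class PseudoSiameseNet(nn.Module):
    """Minimal sketch of a pseudo-siamese CNN for SAR-optical patch matching.

    The two streams share the same layout but NOT their weights, since SAR and
    optical patches have very different statistics. All layer sizes here are
    illustrative assumptions, not the configuration of the original paper.
    """

    def __init__(self, patch_size: int = 64):
        super().__init__()
        self.sar_stream = self._make_stream(in_channels=1)   # single-channel SAR patch
        self.opt_stream = self._make_stream(in_channels=3)   # RGB optical patch
        feat_dim = 128 * (patch_size // 8) ** 2               # spatial size after three 2x poolings
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 2),                                 # corresponding vs. non-corresponding
        )

    @staticmethod
    def _make_stream(in_channels: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, sar_patch: torch.Tensor, opt_patch: torch.Tensor) -> torch.Tensor:
        f_sar = self.sar_stream(sar_patch).flatten(1)
        f_opt = self.opt_stream(opt_patch).flatten(1)
        fused = torch.cat([f_sar, f_opt], dim=1)   # features fused only after the separate streams
        return self.classifier(fused)              # logits for the two-class decision


if __name__ == "__main__":
    net = PseudoSiameseNet(patch_size=64)
    sar = torch.randn(4, 1, 64, 64)   # batch of SAR patches
    opt = torch.randn(4, 3, 64, 64)   # batch of co-located optical patches
    print(net(sar, opt).shape)        # torch.Size([4, 2])
```

Keeping the two streams unconnected and separately weighted is what distinguishes this pseudo-siamese setup from a classical siamese network, which would force identical weights on both inputs despite the differing SAR and optical image statistics.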

Original language: English
Title of host publication: 2017 Joint Urban Remote Sensing Event, JURSE 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781509058082
DOIs
State: Published - 10 May 2017
Event: 2017 Joint Urban Remote Sensing Event, JURSE 2017 - Dubai, United Arab Emirates
Duration: 6 Mar 2017 to 8 Mar 2017

Publication series

Name: 2017 Joint Urban Remote Sensing Event, JURSE 2017

Conference

Conference: 2017 Joint Urban Remote Sensing Event, JURSE 2017
Country/Territory: United Arab Emirates
City: Dubai
Period: 6/03/17 to 8/03/17
