A co-learning method to utilize optical images and photogrammetric point clouds for building extraction

Yuxing Xie, Jiaojiao Tian, Xiao Xiang Zhu

Research output: Contribution to journal › Review article › peer-review
Abstract

Although deep learning techniques have brought unprecedented accuracy to automatic building extraction, several main issues still constitute an obstacle to effective and practical applications. The industry is eager for higher accuracy and more flexible data usage. In this paper, we present a co-learning framework applicable to building extraction from optical images and photogrammetric point clouds, which can take advantage of 2D/3D multimodality data. Instead of direct information fusion, our co-learning framework adaptively exploits knowledge from the other modality during the training phase through a soft connection, via a predefined loss function. Compared to conventional data fusion, this method is more flexible, as it is not mandatory to provide multimodality data in the test phase. We propose two types of co-learning: a standard version and an enhanced version, depending on whether unlabeled training data are employed. Experimental results on two data sets show that the presented methods can enhance the performance of both image and point cloud networks in few-shot tasks, as well as image networks when fully labeled training data sets are applied.
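The "soft connection" described in the abstract couples the two modality networks only through the training loss: each branch keeps its own supervised objective, plus a consistency term that transfers knowledge from the other modality's prediction. A minimal sketch of this idea follows; all names (`co_learning_loss`, `lambda_co`, the squared-difference consistency term) are illustrative assumptions, not the authors' actual formulation.

```python
import math


def cross_entropy(p, y, eps=1e-9):
    """Binary cross-entropy between a predicted building probability p
    and the ground-truth label y (0 or 1)."""
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))


def co_learning_loss(p_img, p_pc, y, lambda_co=0.5):
    """Hypothetical loss for the image branch during co-learning:
    its own supervised term plus a soft-connection term that pulls the
    image prediction toward the point-cloud branch's prediction.
    With lambda_co = 0 this reduces to ordinary single-modality training,
    which is why multimodality data is not needed at test time."""
    supervised = cross_entropy(p_img, y)
    consistency = (p_img - p_pc) ** 2  # soft connection between modalities
    return supervised + lambda_co * consistency
```

For the enhanced (semi-supervised) version described in the abstract, the supervised term would simply be dropped on unlabeled samples, leaving only the cross-modality consistency term as the training signal.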

Original language: English
Article number: 103165
Journal: International Journal of Applied Earth Observation and Geoinformation
Volume: 116
State: Published - Feb 2023

Keywords

  • Building extraction
  • Co-learning
  • Multimodality learning
  • Multispectral images
  • Point clouds
  • Remote sensing
