Global localization of 3D point clouds in building outline maps of urban outdoor environments

Christian Landsiedel, Dirk Wollherr

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

This paper presents a method to localize a robot in a global coordinate frame based on a sparse 2D map containing building outlines and road network information, without any prior information about the robot's location. Its input is a single 3D laser scan of the robot's surroundings. The approach extends the generic chamfer matching template matching technique from image processing by including a visibility analysis in the cost function. Thus, the observed building planes are matched to the expected view of the corresponding map section instead of to the entire map, which enables more accurate matching. Since the formulation operates on generic edge maps from visual sensors, it can be expected to generalize to other input data, e.g., from monocular or stereo cameras. The method is evaluated on two large datasets collected in different real-world urban settings and compared to a baseline method from the literature and to the standard chamfer matching approach, where it shows considerable performance benefits and demonstrates the feasibility of global localization based on sparse building outline data.
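The matching step described in the abstract builds on generic chamfer matching, which the paper extends with a visibility term. The sketch below illustrates only that generic baseline, assuming a rasterized 2D building-outline edge map and scan points already projected onto the map grid; it is not the authors' implementation, and all names (`chamfer_cost`, `global_localization`, `edge_map`, `template_pts`, `candidate_poses`) are hypothetical.

```python
# Minimal sketch of standard chamfer matching over a 2D edge map.
# Assumptions: edge_map is a binary numpy array (1 = building outline pixel),
# template_pts are N x 2 scan points in grid (row, col) coordinates, and
# candidate poses are (row, col, theta) tuples. The paper's visibility-aware
# cost is NOT implemented here.
import numpy as np
from scipy.ndimage import distance_transform_edt


def chamfer_cost(dist_map, template_pts, pose):
    """Mean distance-transform value at the template points after
    applying a candidate pose (translation + rotation) on the map grid."""
    r, c, theta = pose
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    R = np.array([[cos_t, -sin_t], [sin_t, cos_t]])
    pts = template_pts @ R.T + np.array([r, c])
    ij = np.round(pts).astype(int)
    # Discard transformed points that fall outside the map.
    inside = ((ij[:, 0] >= 0) & (ij[:, 0] < dist_map.shape[0]) &
              (ij[:, 1] >= 0) & (ij[:, 1] < dist_map.shape[1]))
    if not inside.any():
        return np.inf
    return dist_map[ij[inside, 0], ij[inside, 1]].mean()


def global_localization(edge_map, template_pts, candidate_poses):
    """Score every candidate pose against the precomputed distance
    transform of the edge map and return the lowest-cost pose."""
    # Distance from every grid cell to the nearest outline pixel.
    dist_map = distance_transform_edt(edge_map == 0)
    costs = [chamfer_cost(dist_map, template_pts, p) for p in candidate_poses]
    return candidate_poses[int(np.argmin(costs))]
```

In the paper's extension, map edges that would not be visible from the candidate pose are excluded from the comparison, so the scan is matched against the expected view rather than the full outline map; the sketch above omits that step.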

Original language: English
Pages (from-to): 429-441
Number of pages: 13
Journal: International Journal of Intelligent Robotics and Applications
Volume: 1
Issue number: 4
DOIs
State: Published - 1 Dec 2017

Keywords

  • Global localization
  • Hybrid mapping
  • Point clouds
  • Semantic mapping
