Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

Martin Persson, Tom Duckett, Achim J. Lilienthal

Research output: Contribution to journal › Article › peer-review


Abstract

This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which marks open ground and indicates the probability that cells are occupied by building walls, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information extends beyond the range of the robot's sensors and predicts where the mobile robot can expect to find buildings and potentially driveable ground.
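The abstract describes extending ground-level semantic labels to unvisited parts of an aerial image. As a toy illustration only (the paper's actual segmentation method is not given here, and all names below are hypothetical), one can treat robot-observed cells as labelled seeds and classify the remaining aerial cells by nearest-seed intensity, a crude stand-in for semi-supervised segmentation:

```python
# Toy sketch, NOT the paper's algorithm: ground-level semantic labels
# act as seeds, and unobserved aerial-image cells are classified by
# 1-nearest-neighbour matching on grayscale intensity.

def classify_aerial_cells(intensities, seeds):
    """intensities: 2D list of grayscale values from the aerial image.
    seeds: dict {(row, col): label} taken from the robot's semantic map,
    e.g. 'building' or 'ground'. Each unseen cell receives the label of
    the seed whose intensity is closest to its own."""
    seed_feats = [(intensities[r][c], lab) for (r, c), lab in seeds.items()]
    labelled = {}
    for r, row in enumerate(intensities):
        for c, v in enumerate(row):
            if (r, c) in seeds:
                labelled[(r, c)] = seeds[(r, c)]
            else:
                # assign the label of the intensity-closest seed
                labelled[(r, c)] = min(seed_feats,
                                       key=lambda f: abs(f[0] - v))[1]
    return labelled

# Example: bright cells resemble the 'building' seed, dark cells 'ground'.
img = [[200, 190, 40],
       [210, 50, 30],
       [60, 45, 20]]
seeds = {(0, 0): 'building', (2, 2): 'ground'}
labels = classify_aerial_cells(img, seeds)
```

In this sketch the semantic map grows beyond the sensed cells, mirroring the idea in the abstract; a real system would use richer image features and a trained classifier rather than raw intensity.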

Original language: English
Pages (from-to): 483-492
Number of pages: 10
Journal: Robotics and Autonomous Systems
Volume: 56
Issue number: 6
DOIs
State: Published - 30 Jun 2008
Externally published: Yes

Keywords

  • Aerial image
  • Mobile robot
  • Semantic mapping
  • Semi-supervised learning
