Virtual sensors for human concepts - Building detection by an outdoor mobile robot

Martin Persson, Tom Duckett, Achim Lilienthal

Publication: Journal article, peer-reviewed

3 citations (Scopus)


In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
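The abstract describes combining weak image features (edge orientation, edge configurations, grey level clustering) into a building/non-building classifier with AdaBoost. As a rough illustration of that feature-combination step, here is a minimal discrete AdaBoost sketch over decision stumps in pure Python. The feature names, thresholds, and toy data are assumptions for demonstration, not the authors' actual features or training set.

```python
# Minimal sketch of discrete AdaBoost with decision stumps, illustrating how
# weak per-feature classifiers can be combined as in the paper. The toy data
# and feature semantics below are invented for illustration only.
import math

def train_stump(X, y, w):
    """Find the threshold stump minimising weighted error over all features."""
    best = None  # (error, feature, threshold, polarity)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (1 if pol * (xi[f] - t) >= 0 else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n           # uniform initial example weights
    ensemble = []
    for _ in range(rounds):
        err, f, t, pol = train_stump(X, y, w)
        err = max(err, 1e-10)   # avoid division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        # Re-weight: increase the weight of misclassified examples.
        for i in range(n):
            pred = 1 if pol * (X[i][f] - t) >= 0 else -1
            w[i] *= math.exp(-alpha * y[i] * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (1 if pol * (x[f] - t) >= 0 else -1)
                for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy feature vectors (hypothetical: vertical-edge ratio, edge-pair count,
# number of grey-level clusters); labels: +1 = building, -1 = nature.
X = [[0.8, 12, 3], [0.7, 10, 4], [0.9, 15, 2],
     [0.2, 3, 8], [0.3, 4, 7], [0.1, 2, 9]]
y = [1, 1, 1, -1, -1, -1]
model = adaboost(X, y, rounds=5)
print([predict(model, x) for x in X])  # → [1, 1, 1, -1, -1, -1]
```

In the paper's setting, each weak learner would operate on one generic feature extracted from a grey level sub-image, and the boosted ensemble's weighted vote yields the building/non-building decision.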

Pages (from - to): 383-390
Journal: Robotics and Autonomous Systems
Publication status: Published - 31 May 2007
Externally published: Yes

