TY - JOUR
T1 - Virtual sensors for human concepts–Building detection by an outdoor mobile robot
AU - Persson, Martin
AU - Duckett, Tom
AU - Lilienthal, Achim
N1 - Funding Information:
Martin Persson is supported by the Swedish Defence Materiel Administration.
PY - 2007/5/31
Y1 - 2007/5/31
AB - In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real-world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey-level images. The features are based on edge orientation, the configurations of these edges, and on grey-level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of the location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
KW - AdaBoost
KW - Automatic building detection
KW - Human concepts
KW - Human-robot communication
KW - Virtual sensor
UR - http://www.scopus.com/inward/record.url?scp=34247125634&partnerID=8YFLogxK
DO - 10.1016/j.robot.2006.12.002
M3 - Article
AN - SCOPUS:34247125634
SN - 0921-8890
VL - 55
SP - 383
EP - 390
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
IS - 5
ER -