TY - GEN
T1 - Probabilistic semantic mapping with a virtual sensor for building/nature detection
AU - Persson, Martin
AU - Duckett, Tom
AU - Valgren, Christoffer
AU - Lilienthal, Achim
PY - 2007
Y1 - 2007
N2 - In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities (range sensors and vision sensors are considered) to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for recognition of real-world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (location and orientation of the robot), giving the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and an evident distinction between 'building' objects and 'nature' objects.
AB - In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities (range sensors and vision sensors are considered) to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for recognition of real-world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (location and orientation of the robot), giving the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and an evident distinction between 'building' objects and 'nature' objects.
UR - http://www.scopus.com/inward/record.url?scp=34948902289&partnerID=8YFLogxK
U2 - 10.1109/CIRA.2007.382870
DO - 10.1109/CIRA.2007.382870
M3 - Conference contribution
AN - SCOPUS:34948902289
SN - 1424407907
SN - 9781424407903
T3 - Proceedings of the 2007 IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA 2007
SP - 236
EP - 242
BT - Proceedings of the 2007 IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA 2007
T2 - 2007 IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA 2007
Y2 - 20 June 2007 through 23 June 2007
ER -