TY - JOUR
T1 - Autonomous behavior-based switched top-down and bottom-up visual attention for mobile robots
AU - Xu, Tingting
AU - Kühnlenz, Kolja
AU - Buss, Martin
N1 - Funding Information:
Manuscript received January 25, 2010; revised May 31, 2010; accepted July 25, 2010. Date of publication August 26, 2010; date of current version September 27, 2010. This paper was recommended for publication by Associate Editor T. Kanda and Editor G. Oriolo upon evaluation of the reviewers’ comments. This work was supported in part by the German Research Foundation (DFG) Excellence Initiative Research Cluster Cognition for Technical Systems (CoTeSys) (www.cotesys.org) and in part by the Institute for Advanced Study, Technische Universität München (www.tum-ias.de).
PY - 2010/10
Y1 - 2010/10
N2 - In this paper, autonomous switching between two basic attention selection mechanisms, i.e., top-down and bottom-up, is proposed. This approach fills a gap in object search with conventional top-down-biased bottom-up attention selection, which fails if a group of objects is searched whose appearances cannot be uniquely described by the low-level features used in bottom-up computational models. Three internal robot states, namely observing, operating, and exploring, determine the visual selection behavior. A vision-guided mobile robot equipped with an active stereo camera is used to demonstrate our strategy and to evaluate its performance experimentally. This approach facilitates the adaptation of visual behavior to different internal robot states and benefits further development toward cognitive visual perception in the robotics domain.
AB - In this paper, autonomous switching between two basic attention selection mechanisms, i.e., top-down and bottom-up, is proposed. This approach fills a gap in object search with conventional top-down-biased bottom-up attention selection, which fails if a group of objects is searched whose appearances cannot be uniquely described by the low-level features used in bottom-up computational models. Three internal robot states, namely observing, operating, and exploring, determine the visual selection behavior. A vision-guided mobile robot equipped with an active stereo camera is used to demonstrate our strategy and to evaluate its performance experimentally. This approach facilitates the adaptation of visual behavior to different internal robot states and benefits further development toward cognitive visual perception in the robotics domain.
KW - Vision-guided robotics
KW - visual attention control
UR - http://www.scopus.com/inward/record.url?scp=77957717664&partnerID=8YFLogxK
U2 - 10.1109/TRO.2010.2062571
DO - 10.1109/TRO.2010.2062571
M3 - Article
AN - SCOPUS:77957717664
SN - 1552-3098
VL - 26
SP - 947
EP - 954
JO - IEEE Transactions on Robotics
JF - IEEE Transactions on Robotics
IS - 5
M1 - 5557826
ER -