TY - CHAP
T1 - Task Representation in Robots for Robust Coupling of Perception to Action in Dynamic Scenes
AU - Burschka, Darius
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Most current perception systems are designed to represent the static geometry of the environment and to monitor the execution of their tasks in 3D Cartesian representations. While this representation allows a human-readable definition of tasks in robotic systems and provides direct references to the static environment representation, it does not correspond to the native data format of many passive sensor systems. Additional calibration parameters are necessary to transform the sensor data into Cartesian space, which decreases the robustness of the perception system and makes it sensitive to changes and errors. An example of an alternative coupling strategy for perception modules is the shift from look-then-move to visual servoing in grasping, where 3D task planning is replaced by a task formulated directly in the image space. Errors and goals are then represented directly in sensor space. In addition, the spatial ordering of information based on Cartesian data may lead to an incorrect prioritization of dynamic objects, e.g., the nearest objects are not always the prime collision candidates in the scene. We propose alternative ways to represent task goals in robotic systems that are closer to the native sensor space and are therefore more robust to errors. We present our initial ideas on how these task representations can be applied in the manipulation and automotive domains.
AB - Most current perception systems are designed to represent the static geometry of the environment and to monitor the execution of their tasks in 3D Cartesian representations. While this representation allows a human-readable definition of tasks in robotic systems and provides direct references to the static environment representation, it does not correspond to the native data format of many passive sensor systems. Additional calibration parameters are necessary to transform the sensor data into Cartesian space, which decreases the robustness of the perception system and makes it sensitive to changes and errors. An example of an alternative coupling strategy for perception modules is the shift from look-then-move to visual servoing in grasping, where 3D task planning is replaced by a task formulated directly in the image space. Errors and goals are then represented directly in sensor space. In addition, the spatial ordering of information based on Cartesian data may lead to an incorrect prioritization of dynamic objects, e.g., the nearest objects are not always the prime collision candidates in the scene. We propose alternative ways to represent task goals in robotic systems that are closer to the native sensor space and are therefore more robust to errors. We present our initial ideas on how these task representations can be applied in the manipulation and automotive domains.
UR - http://www.scopus.com/inward/record.url?scp=85107020174&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-28619-4_4
DO - 10.1007/978-3-030-28619-4_4
M3 - Chapter
AN - SCOPUS:85107020174
T3 - Springer Proceedings in Advanced Robotics
SP - 25
EP - 31
BT - Springer Proceedings in Advanced Robotics
PB - Springer Science and Business Media B.V.
ER -