TY - GEN
T1 - Multiperspective teaching of unknown objects via shared-gaze-based multimodal human-robot interaction
AU - Weber, Daniel
AU - Fuhl, Wolfgang
AU - Kasneci, Enkelejda
AU - Zell, Andreas
N1 - Publisher Copyright:
© 2023 Association for Computing Machinery.
PY - 2023/3/13
Y1 - 2023/3/13
N2 - For successful deployment of robots in multifaceted situations, the robot's understanding of its environment is indispensable. With the advancing performance of state-of-the-art object detectors, the capability of robots to detect objects within their interaction domain is also improving. However, this binds the robot to a few trained classes and prevents it from adapting to unfamiliar surroundings beyond predefined scenarios. In such scenarios, humans could assist robots amidst the overwhelming number of interaction entities and impart the requisite expertise by acting as teachers. We propose a novel pipeline that effectively harnesses human gaze and augmented reality in a human-robot collaboration context to teach a robot novel objects in its surrounding environment. By intertwining gaze (to guide the robot's attention to an object of interest) with augmented reality (to convey the respective class information), we enable the robot to quickly acquire a significant amount of automatically labeled training data on its own. Training in a transfer learning fashion, we demonstrate the robot's capability to detect recently learned objects and evaluate the influence of different machine learning models and learning procedures as well as the amount of training data involved. Our multimodal approach proves to be an efficient and natural way to teach the robot novel objects based on a few instances and allows it to detect classes for which no training dataset is available. In addition, we make our dataset publicly available to the research community; it consists of RGB and depth data, intrinsic and extrinsic camera parameters, and regions of interest.
AB - For successful deployment of robots in multifaceted situations, the robot's understanding of its environment is indispensable. With the advancing performance of state-of-the-art object detectors, the capability of robots to detect objects within their interaction domain is also improving. However, this binds the robot to a few trained classes and prevents it from adapting to unfamiliar surroundings beyond predefined scenarios. In such scenarios, humans could assist robots amidst the overwhelming number of interaction entities and impart the requisite expertise by acting as teachers. We propose a novel pipeline that effectively harnesses human gaze and augmented reality in a human-robot collaboration context to teach a robot novel objects in its surrounding environment. By intertwining gaze (to guide the robot's attention to an object of interest) with augmented reality (to convey the respective class information), we enable the robot to quickly acquire a significant amount of automatically labeled training data on its own. Training in a transfer learning fashion, we demonstrate the robot's capability to detect recently learned objects and evaluate the influence of different machine learning models and learning procedures as well as the amount of training data involved. Our multimodal approach proves to be an efficient and natural way to teach the robot novel objects based on a few instances and allows it to detect classes for which no training dataset is available. In addition, we make our dataset publicly available to the research community; it consists of RGB and depth data, intrinsic and extrinsic camera parameters, and regions of interest.
KW - augmented reality
KW - dataset
KW - eye tracking
KW - gaze
KW - human-robot interaction
KW - multimodal interaction
KW - shared attention
KW - teaching
KW - unknown object detection
UR - https://www.scopus.com/pages/publications/85150367246
U2 - 10.1145/3568162.3578627
DO - 10.1145/3568162.3578627
M3 - Conference contribution
AN - SCOPUS:85150367246
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 544
EP - 553
BT - HRI 2023 - Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
PB - IEEE Computer Society
T2 - 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023
Y2 - 13 March 2023 through 16 March 2023
ER -