TY - GEN
T1 - Human-Inspired Audiovisual Inducement of Whole-Body Responses
AU - Bien, Seongjin
AU - Skerlj, Jon
AU - Thiel, Paul
AU - Eberle, Felix
AU - Trobinger, Mario
AU - Stolle, Christian
AU - Figueredo, Luis
AU - Sadeghian, Hamid
AU - Naceri, Abdeldjallil
AU - Haddadin, Sami
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - For service humanoid robots, it is crucial to design human-robot interaction behaviors that avoid eliciting negative responses from users. Drawing inspiration from the natural human response of turning attention when hearing one's own name, this paper introduces a system that enables users to effortlessly capture a robot's attention, fostering a more natural and intuitive interaction. The proposed system consists of an integrated audio-visual perception system and a sophisticated whole-body controller, effectively merging audio and vision data to stimulate comprehensive and coordinated robot motion during human-robot interaction scenarios. The system has been successfully implemented and tested on the service humanoid robot GARMI. The resultant whole-body motion emulates the natural human process of localizing sound and visual cues, enabling the robot to track and follow the user who initiated the interaction. Moreover, the system's capabilities have been extended to include an object-reaching use-case, effectively demonstrating the versatility of the whole-body controller. To enhance user-friendliness, we have incorporated a natural language command interface, allowing users to effortlessly control the activation of the proposed system while serving as an attention-switching mechanism. Furthermore, the robot employs an intuitive audio-visual feedback mechanism, offering transparency about its current state to users. Lastly, the system's performance has been rigorously evaluated through a series of experiments, confirming its effectiveness and reliability.
AB - For service humanoid robots, it is crucial to design human-robot interaction behaviors that avoid eliciting negative responses from users. Drawing inspiration from the natural human response of turning attention when hearing one's own name, this paper introduces a system that enables users to effortlessly capture a robot's attention, fostering a more natural and intuitive interaction. The proposed system consists of an integrated audio-visual perception system and a sophisticated whole-body controller, effectively merging audio and vision data to stimulate comprehensive and coordinated robot motion during human-robot interaction scenarios. The system has been successfully implemented and tested on the service humanoid robot GARMI. The resultant whole-body motion emulates the natural human process of localizing sound and visual cues, enabling the robot to track and follow the user who initiated the interaction. Moreover, the system's capabilities have been extended to include an object-reaching use-case, effectively demonstrating the versatility of the whole-body controller. To enhance user-friendliness, we have incorporated a natural language command interface, allowing users to effortlessly control the activation of the proposed system while serving as an attention-switching mechanism. Furthermore, the robot employs an intuitive audio-visual feedback mechanism, offering transparency about its current state to users. Lastly, the system's performance has been rigorously evaluated through a series of experiments, confirming its effectiveness and reliability.
UR - http://www.scopus.com/inward/record.url?scp=85182943416&partnerID=8YFLogxK
U2 - 10.1109/Humanoids57100.2023.10375229
DO - 10.1109/Humanoids57100.2023.10375229
M3 - Conference contribution
AN - SCOPUS:85182943416
T3 - IEEE-RAS International Conference on Humanoid Robots
BT - 2023 IEEE-RAS 22nd International Conference on Humanoid Robots, Humanoids 2023
PB - IEEE Computer Society
T2 - 22nd IEEE-RAS International Conference on Humanoid Robots, Humanoids 2023
Y2 - 12 December 2023 through 14 December 2023
ER -