Abstract
This work is a first step towards an integration of multimodality, with the aim of making efficient use of both human-like and non-human-like feedback modalities to optimize proactive information retrieval from task-related Human-Robot Interaction (HRI) in human environments. The presented approach combines the human-like modalities of speech and emotional facial mimicry with non-human-like modalities. The proposed non-human-like modalities are a screen displaying the robot's retrieved knowledge to the human, and a pointer mounted above the robot's head for indicating directions and referring to objects in shared visual space as an equivalent to arm and hand gestures. First, pre-interaction feedback is explored in an experiment investigating different approach behaviors in order to find socially acceptable trajectories that increase the success of interactions and thus the efficiency of information retrieval. Second, pre-evaluated human-like modalities are introduced. First results of a multimodal feedback study are presented in the context of the IURO project, in which a robot asks for directions to a predefined goal location.
| Original language | English |
|---|---|
| Pages (from - to) | 313-326 |
| Number of pages | 14 |
| Journal | Journal of Advanced Computational Intelligence and Intelligent Informatics |
| Volume | 16 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - March 2012 |