TY - GEN
T1 - Real-time framework for multimodal human-robot interaction
AU - Gast, Jürgen
AU - Bannat, Alexander
AU - Rehrl, Tobias
AU - Wallhoff, Frank
AU - Rigoll, Gerhard
AU - Wendt, Cornelia
AU - Schmidt, Sabrina
AU - Popp, Michael
AU - Färber, Berthold
PY - 2009
Y1 - 2009
N2 - This paper presents a new framework for multimodal data processing in real time. The framework comprises modules for different input and output signals and was designed for human-human and human-robot interaction scenarios. Individual modules for recording selected channels such as speech, gestures, or facial expressions can be combined with different output options (e.g., robot reactions) in a highly flexible manner. Depending on the included modules, both online and offline data processing is possible. The framework was used to analyze human-human interaction to gain insights into important factors and their dynamics. The recorded data comprise speech, facial expressions, gestures, and physiological signals. This naturally produced data was annotated and labeled in order to train recognition modules that will be integrated into the existing framework. The overall aim is to create a system that can recognize and react to the parameters that humans take into account during interaction. In this paper, the technical implementation and its application in a human-human and a human-robot interaction scenario are presented.
AB - This paper presents a new framework for multimodal data processing in real time. The framework comprises modules for different input and output signals and was designed for human-human and human-robot interaction scenarios. Individual modules for recording selected channels such as speech, gestures, or facial expressions can be combined with different output options (e.g., robot reactions) in a highly flexible manner. Depending on the included modules, both online and offline data processing is possible. The framework was used to analyze human-human interaction to gain insights into important factors and their dynamics. The recorded data comprise speech, facial expressions, gestures, and physiological signals. This naturally produced data was annotated and labeled in order to train recognition modules that will be integrated into the existing framework. The overall aim is to create a system that can recognize and react to the parameters that humans take into account during interaction. In this paper, the technical implementation and its application in a human-human and a human-robot interaction scenario are presented.
UR - http://www.scopus.com/inward/record.url?scp=70349985050&partnerID=8YFLogxK
U2 - 10.1109/HSI.2009.5090992
DO - 10.1109/HSI.2009.5090992
M3 - Conference contribution
AN - SCOPUS:70349985050
SN - 9781424439607
T3 - Proceedings - 2009 2nd Conference on Human System Interactions, HSI '09
SP - 276
EP - 283
BT - Proceedings - 2009 2nd Conference on Human System Interactions, HSI '09
T2 - 2009 2nd Conference on Human System Interactions, HSI '09
Y2 - 21 May 2009 through 23 May 2009
ER -