TY - GEN
T1 - Multiple parallel vision-based recognition in a real-time framework for human-robot-interaction scenarios
AU - Rehrl, Tobias
AU - Bannat, Alexander
AU - Gast, Jürgen
AU - Wallhoff, Frank
AU - Rigoll, Gerhard
AU - Mayer, Christoph
AU - Riaz, Zadid
AU - Radig, Bernd
AU - Sosnowski, Stefan
AU - Kühnlenz, Kolja
PY - 2010
Y1 - 2010
N2 - Everyday human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, body pose and gestures, allowing humans to pass large amounts of information in a short time. In contrast, traditional human-machine communication is often unintuitive and requires specifically trained personnel. In this paper, we present a real-time capable framework that recognizes traditional visual human communication signals in order to establish a more intuitive human-machine interaction. Humans rely on the interaction partner's face for identification, which helps them to adapt to the interaction partner and utilize context information. Head gestures (head nodding and head shaking) are a convenient way to show agreement or disagreement. Facial expressions give evidence about the interaction partner's emotional state, and hand gestures are a fast way of passing simple commands. The recognition of all interaction cues is performed in parallel, enabled by a shared memory implementation.
AB - Everyday human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, body pose and gestures, allowing humans to pass large amounts of information in a short time. In contrast, traditional human-machine communication is often unintuitive and requires specifically trained personnel. In this paper, we present a real-time capable framework that recognizes traditional visual human communication signals in order to establish a more intuitive human-machine interaction. Humans rely on the interaction partner's face for identification, which helps them to adapt to the interaction partner and utilize context information. Head gestures (head nodding and head shaking) are a convenient way to show agreement or disagreement. Facial expressions give evidence about the interaction partner's emotional state, and hand gestures are a fast way of passing simple commands. The recognition of all interaction cues is performed in parallel, enabled by a shared memory implementation.
KW - Facial expressions
KW - Gesture recognition
KW - Human-robot interaction
KW - Real-time image processing
UR - http://www.scopus.com/inward/record.url?scp=77952191016&partnerID=8YFLogxK
U2 - 10.1109/ACHI.2010.44
DO - 10.1109/ACHI.2010.44
M3 - Conference contribution
AN - SCOPUS:77952191016
SN - 9780769539577
T3 - 3rd International Conference on Advances in Computer-Human Interactions, ACHI 2010
SP - 50
EP - 55
BT - 3rd International Conference on Advances in Computer-Human Interactions, ACHI 2010
T2 - 3rd International Conference on Advances in Computer-Human Interactions, ACHI 2010
Y2 - 10 February 2010 through 16 February 2010
ER -