TY - GEN
T1 - Graphical models for real-time capable gesture recognition
AU - Rehrl, T.
AU - Theibing, N.
AU - Bannat, A.
AU - Gast, J.
AU - Arsić, D.
AU - Wallhoff, F.
AU - Rigoll, G.
AU - Mayer, C.
AU - Radig, B.
PY - 2010
Y1 - 2010
N2 - In everyday life, head gestures such as nodding or shaking the head and hand gestures such as pointing form important aspects of human-human interaction. Therefore, recent research considers integrating these intuitive communication cues into technical systems to improve and ease human-computer interaction. In this paper we present a vision-based system that recognizes head gestures (nodding, shaking, neutral) and dynamic hand gestures (hand moving right/left/up/down, fist moving right/left) in real time. The gestural input provides a communication modality for a human-robot interaction scenario situated in an assistive household environment. Fast low-level image-feature extraction contributes to the real-time capability of the system, and advanced classification approaches relying on Graphical Models provide high robustness. Graphical Models offer the possibility of grouping the input features into several sub-nodes, resulting in better classification than that obtained via a traditional Hidden Markov Model. The applied grouping can account for interdependencies owing either to physical constraints (as for the head gestures) or to interrelations between shape and motion (as for the hand gestures).
AB - In everyday life, head gestures such as nodding or shaking the head and hand gestures such as pointing form important aspects of human-human interaction. Therefore, recent research considers integrating these intuitive communication cues into technical systems to improve and ease human-computer interaction. In this paper we present a vision-based system that recognizes head gestures (nodding, shaking, neutral) and dynamic hand gestures (hand moving right/left/up/down, fist moving right/left) in real time. The gestural input provides a communication modality for a human-robot interaction scenario situated in an assistive household environment. Fast low-level image-feature extraction contributes to the real-time capability of the system, and advanced classification approaches relying on Graphical Models provide high robustness. Graphical Models offer the possibility of grouping the input features into several sub-nodes, resulting in better classification than that obtained via a traditional Hidden Markov Model. The applied grouping can account for interdependencies owing either to physical constraints (as for the head gestures) or to interrelations between shape and motion (as for the hand gestures).
KW - Gesture recognition
KW - Graphical models
KW - Real-time processing
UR - http://www.scopus.com/inward/record.url?scp=78651109572&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2010.5651873
DO - 10.1109/ICIP.2010.5651873
M3 - Conference contribution
AN - SCOPUS:78651109572
SN - 9781424479948
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 2445
EP - 2448
BT - 2010 IEEE International Conference on Image Processing, ICIP 2010 - Proceedings
T2 - 2010 17th IEEE International Conference on Image Processing, ICIP 2010
Y2 - 26 September 2010 through 29 September 2010
ER -