TY - GEN
T1 - Tailoring model-based techniques to facial expression interpretation
AU - Wimmer, Matthias
AU - Mayer, Christoph
AU - Pietzsch, Sylvia
AU - Radig, Bernd
PY - 2008
Y1 - 2008
N2 - Computers are widely deployed in our daily lives, but human-computer interaction still lacks intuition. Researchers intend to resolve these shortcomings by augmenting traditional systems with human-like interaction capabilities. Knowledge about human emotion, behavior, and intention is necessary to construct convenient interaction mechanisms. Today, dedicated hardware often infers the emotional state from human body measures. Similar to humans interpreting facial expressions, our approach accomplishes this task from video acquired with standard hardware that does not interfere with people. It exploits model-based techniques that accurately localize facial features, seamlessly track them through image sequences, and finally interpret the visible information. We make use of state-of-the-art techniques and specifically adapt most of the components involved to this scenario, which provides high accuracy and real-time capability. We base our experimental evaluation on publicly available databases and compare our results to those of related approaches. Our proof of concept demonstrates the feasibility of our approach and shows promise for integration into various applications.
AB - Computers are widely deployed in our daily lives, but human-computer interaction still lacks intuition. Researchers intend to resolve these shortcomings by augmenting traditional systems with human-like interaction capabilities. Knowledge about human emotion, behavior, and intention is necessary to construct convenient interaction mechanisms. Today, dedicated hardware often infers the emotional state from human body measures. Similar to humans interpreting facial expressions, our approach accomplishes this task from video acquired with standard hardware that does not interfere with people. It exploits model-based techniques that accurately localize facial features, seamlessly track them through image sequences, and finally interpret the visible information. We make use of state-of-the-art techniques and specifically adapt most of the components involved to this scenario, which provides high accuracy and real-time capability. We base our experimental evaluation on publicly available databases and compare our results to those of related approaches. Our proof of concept demonstrates the feasibility of our approach and shows promise for integration into various applications.
UR - http://www.scopus.com/inward/record.url?scp=47349116904&partnerID=8YFLogxK
U2 - 10.1109/ACHI.2008.7
DO - 10.1109/ACHI.2008.7
M3 - Conference contribution
AN - SCOPUS:47349116904
SN - 0769530869
SN - 9780769530864
T3 - Proceedings of the 1st International Conference on Advances in Computer-Human Interaction, ACHI 2008
SP - 303
EP - 308
BT - Proceedings of the 1st International Conference on Advances in Computer-Human Interaction, ACHI 2008
T2 - 1st International Conference on Advances in Computer-Human Interaction, ACHI 2008
Y2 - 10 February 2008 through 15 February 2008
ER -