TY - JOUR
T1 - Can You Read My Face?
T2 - A Methodological Variation for Assessing Facial Expressions of Robotic Heads
AU - Mirnig, Nicole
AU - Strasser, Ewald
AU - Weiss, Astrid
AU - Kühnlenz, Barbara
AU - Wollherr, Dirk
AU - Tscheligi, Manfred
N1 - Publisher Copyright:
© 2014, Springer Science+Business Media Dordrecht.
PY - 2015/2
Y1 - 2015/2
AB - Our paper reports on an online study of robot facial expressions. On the one hand, we performed this study to assess the quality of the current facial expressions of two robot heads. On the other hand, we aimed at developing a simple, easy-to-use methodological variation for evaluating facial expressions of robotic heads. Short movie clips of two different robot heads showing a happy, sad, surprised, and neutral facial expression were compiled into an online survey to examine how people interpret these expressions. Additionally, we added a control condition with a human face showing the same four emotions. The results showed that the facial expressions of both heads could be recognized well. Even the blended emotion surprised was recognized, although it carried both positive and negative connotations. These results underline the importance of situational context for correctly interpreting emotional facial expressions. Besides the expected finding that the human was perceived as significantly more anthropomorphic and animate than both robot heads, the robot head with the more human-like design was rated significantly higher with respect to anthropomorphism than the robot head with animal-like features. In terms of the validation procedure, we provide evidence for a feasible two-step approach. Assessing participants’ dispositional empathy with a questionnaire ensures that they are, in general, able to decode facial expressions into the corresponding emotions. Subsequently, robot facial expressions can be validated with a closed-question approach.
KW - Facial expressions
KW - Human-robot interaction
KW - Robot emotions
KW - Social robots
UR - http://www.scopus.com/inward/record.url?scp=84924308183&partnerID=8YFLogxK
U2 - 10.1007/s12369-014-0261-z
DO - 10.1007/s12369-014-0261-z
M3 - Article
AN - SCOPUS:84924308183
SN - 1875-4791
VL - 7
SP - 63
EP - 76
JO - International Journal of Social Robotics
JF - International Journal of Social Robotics
IS - 1
ER -