TY - JOUR
T1 - Deep Affect Prediction in-the-Wild
T2 - Aff-Wild Database and Challenge, Deep Architectures, and Beyond
AU - Kollias, Dimitrios
AU - Tzirakis, Panagiotis
AU - Nicolaou, Mihalis A.
AU - Papaioannou, Athanasios
AU - Zhao, Guoying
AU - Schuller, Björn
AU - Kotsia, Irene
AU - Zafeiriou, Stefanos
N1 - Publisher Copyright:
© 2019, The Author(s).
PY - 2019/6/1
Y1 - 2019/6/1
N2 - Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative an emotion is) and arousal (i.e., the intensity of emotional activation) constitute popular and effective representations of affect. Nevertheless, the majority of datasets collected thus far, although containing naturalistic emotional states, have been captured under highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge), which was recently organized in conjunction with CVPR 2017 on the Aff-Wild database and was the first-ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture that predicts continuous emotion dimensions from visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features while also modeling the temporal dynamics of human behavior via the recurrent layers. AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the Aff-Wild database to learn features that can serve as priors for achieving the best performance on both dimensional and categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge.
KW - AFEW
KW - AFEW-VA
KW - Aff-Wild
KW - AffWildNet
KW - Arousal
KW - Categorical
KW - Challenge
KW - Convolutional
KW - Database
KW - Deep
KW - Dimensional
KW - EmotiW
KW - Emotion
KW - Facial
KW - In-the-wild
KW - RECOLA
KW - Recognition
KW - Recurrent
KW - Valence
UR - http://www.scopus.com/inward/record.url?scp=85061710827&partnerID=8YFLogxK
U2 - 10.1007/s11263-019-01158-4
DO - 10.1007/s11263-019-01158-4
M3 - Article
AN - SCOPUS:85061710827
SN - 0920-5691
VL - 127
SP - 907
EP - 929
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
IS - 6-7
ER -