TY - JOUR
T1 - Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data
AU - Ringeval, Fabien
AU - Eyben, Florian
AU - Kroupi, Eleni
AU - Yuce, Anil
AU - Thiran, Jean-Philippe
AU - Ebrahimi, Touradj
AU - Lalanne, Denis
AU - Schuller, Björn
N1 - Publisher Copyright:
© 2014 Elsevier B.V. All rights reserved.
PY - 2015/11/15
Y1 - 2015/11/15
AB - Automatic emotion recognition systems based on supervised machine learning require reliable annotation of affective behaviours to build useful models. Whereas the dimensional approach is becoming increasingly popular for rating affective behaviours in continuous time domains, e.g., arousal and valence, methodologies that take the reaction lags of human raters into account are still rare. We therefore investigate the relevance of machine learning algorithms that can integrate contextual information into the modelling, as long short-term memory recurrent neural networks do, to automatically predict emotion from several (asynchronous) raters in continuous time domains, i.e., arousal and valence. Evaluations are performed on the recently proposed RECOLA multimodal database (27 subjects, 5 min of data and six raters each), which includes audio, video, and physiological (ECG, EDA) data; studies uniting audiovisual and physiological information are indeed still very rare. Features are extracted with various window sizes for each modality, and performance for automatic emotion prediction is compared across different neural network architectures and fusion approaches (feature-level/decision-level). The results show that: (i) LSTM networks can deal with the (asynchronous) dependencies found between continuous ratings of emotion and video data, (ii) the prediction of emotional valence requires a longer analysis window than that of arousal, and (iii) decision-level fusion leads to better performance than feature-level fusion. The best performance (concordance correlation coefficient) for multimodal emotion prediction is 0.804 for arousal and 0.528 for valence.
KW - Audiovisual and physiological data
KW - Context-learning long short-term memory recurrent neural networks
KW - Continuous affect analysis
KW - Multi-task learning
KW - Multimodal fusion
KW - Multi-time resolution feature extraction
UR - http://www.scopus.com/inward/record.url?scp=84943197961&partnerID=8YFLogxK
U2 - 10.1016/j.patrec.2014.11.007
DO - 10.1016/j.patrec.2014.11.007
M3 - Article
AN - SCOPUS:84943197961
SN - 0167-8655
VL - 66
SP - 22
EP - 30
JO - Pattern Recognition Letters
JF - Pattern Recognition Letters
ER -