LSTM-modeling of continuous emotions in an audiovisual affect recognition framework

Martin Wöllmer, Moritz Kaiser, Florian Eyben, Björn Schuller, Gerhard Rigoll

Research output: Contribution to journal › Article › peer-review

238 Scopus citations

Abstract

Automatically recognizing human emotions from spontaneous and non-prototypical real-life data is currently one of the most challenging tasks in the field of affective computing. This article presents our recent advances in assessing dimensional representations of emotion, such as arousal, expectation, power, and valence, in an audiovisual human-computer interaction scenario. Building on previous studies which demonstrate that long-range context modeling tends to increase accuracies of emotion recognition, we propose a fully automatic audiovisual recognition approach based on Long Short-Term Memory (LSTM) modeling of word-level audio and video features. LSTM networks are able to incorporate knowledge about how emotions typically evolve over time, so that the inferred emotion estimates are produced under consideration of an optimal amount of context. Extensive evaluations on the Audiovisual Sub-Challenge of the 2011 Audio/Visual Emotion Challenge show how acoustic, linguistic, and visual features contribute to the recognition of different affective dimensions as annotated in the SEMAINE database. We apply the same acoustic features as used in the challenge baseline system, whereas visual features are computed via a novel facial movement feature extractor. Comparing our results with the recognition scores of all Audiovisual Sub-Challenge participants, we find that the proposed LSTM-based technique leads to the best average recognition performance reported for this task so far.
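To illustrate the core idea the abstract describes — an LSTM carrying context across a sequence of word-level feature vectors and emitting one continuous estimate (e.g. arousal) per step — the following is a minimal pure-Python sketch. The weights, dimensions, feature values, and readout are illustrative assumptions, not the authors' trained model or the features used in the paper.

```python
# Toy LSTM cell for continuous emotion regression: processes a sequence
# of word-level feature vectors, carries context via hidden/cell states,
# and emits one continuous value per step. All numbers are illustrative.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        n = input_size + hidden_size
        # One weight matrix per gate: input (i), forget (f), output (o),
        # candidate cell (g); each maps [x_t; h_{t-1}] to the hidden size.
        def mat():
            return [[rng.uniform(-0.1, 0.1) for _ in range(n)]
                    for _ in range(hidden_size)]
        self.W = {gate: mat() for gate in "ifog"}
        self.b = {gate: [0.0] * hidden_size for gate in "ifog"}
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = x + h  # concatenate current input and previous hidden state
        def lin(gate):
            return [sum(w * v for w, v in zip(row, z)) + b
                    for row, b in zip(self.W[gate], self.b[gate])]
        i = [sigmoid(v) for v in lin("i")]
        f = [sigmoid(v) for v in lin("f")]
        o = [sigmoid(v) for v in lin("o")]
        g = [math.tanh(v) for v in lin("g")]
        c_new = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
        return h_new, c_new

def predict_dimension(cell, sequence):
    """Run the cell over word-level feature vectors and map each hidden
    state to one continuous estimate via a toy mean readout."""
    h = [0.0] * cell.hidden_size
    c = [0.0] * cell.hidden_size
    outputs = []
    for x in sequence:
        h, c = cell.step(x, h, c)
        outputs.append(sum(h) / len(h))  # toy linear readout
    return outputs

# Toy utterance: 5 "words", each with a 3-dimensional feature vector.
cell = LSTMCell(input_size=3, hidden_size=4)
features = [[0.2, -0.1, 0.5], [0.0, 0.3, -0.2],
            [0.4, 0.1, 0.0], [-0.3, 0.2, 0.1], [0.1, 0.0, 0.2]]
estimates = predict_dimension(cell, features)
print(len(estimates))  # prints 5: one continuous estimate per word
```

Because the cell state persists across steps, each estimate reflects the preceding words, which is the long-range context behavior the paper exploits; the actual system trains such networks on the SEMAINE annotations rather than using random weights.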

Original language: English
Pages (from-to): 153-163
Number of pages: 11
Journal: Image and Vision Computing
Volume: 31
Issue number: 2
DOIs
State: Published - 2013

Keywords

  • Context modeling
  • Emotion recognition
  • Facial movement features
  • Long short-term memory
