Speech-based non-prototypical affect recognition for child-robot interaction in reverberated environments

Martin Wöllmer, Felix Weninger, Stefan Steidl, Anton Batliner, Björn Schuller

Research output: Contribution to journal › Conference article › peer-review

5 Scopus citations

Abstract

We present a study on the effect of reverberation on acoustic-linguistic recognition of non-prototypical emotions during child-robot interaction. Investigating the well-defined Interspeech 2009 Emotion Challenge task of recognizing negative emotions in children's speech, we focus on the impact of artificial and real reverberation conditions on the quality of linguistic features and on emotion recognition accuracy. To maintain acceptable recognition performance for both spoken content and affective state, we consider matched and multi-condition training and apply our novel multi-stream automatic speech recognition system, which outperforms conventional Hidden Markov Modeling. Depending on the acoustic condition, we obtain unweighted emotion recognition accuracies between 65.4% and 70.3% when applying our multi-stream system in combination with the SimpleLogistic algorithm for joint acoustic-linguistic analysis.
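The "unweighted accuracy" reported in the abstract refers to the unweighted average recall (UAR), the official metric of the Interspeech 2009 Emotion Challenge: the mean of the per-class recalls, so that the rare negative-emotion class counts as much as the majority class. A minimal sketch of that metric (the class labels and data below are purely illustrative, not from the paper):

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls: each class contributes equally,
    regardless of how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy two-class example with class imbalance (hypothetical labels):
y_true = ["NEG", "NEG", "IDL", "IDL", "IDL", "IDL"]
y_pred = ["NEG", "IDL", "IDL", "IDL", "IDL", "NEG"]
print(unweighted_average_recall(y_true, y_pred))  # (1/2 + 3/4) / 2 = 0.625
```

Note that plain (weighted) accuracy on this toy data would be 4/6 ≈ 0.667; UAR is lower because it does not let the majority class dominate, which is why it is preferred for imbalanced emotion corpora.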

Original language: English
Pages (from-to): 3113-3116
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2011
Event: 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011 - Florence, Italy
Duration: 27 Aug 2011 – 31 Aug 2011

Keywords

  • Acoustic-linguistic emotion recognition
  • Affective computing
  • Child-robot interaction
  • Reverberation
