Cross-Corpus acoustic emotion recognition: Variances and strategies

Björn Schuller, Bogdan Vlasenko, Florian Eyben, Martin Wöllmer, André Stuhlsatz, Andreas Wendemuth, Gerhard Rigoll

Research output: Contribution to journal › Article › peer-review

342 Scopus citations

Abstract

As the recognition of emotion from speech has matured to a degree where it becomes applicable in real-life settings, it is time for a realistic view of obtainable performance. Most studies tend toward overestimation in this respect: acted data is often used rather than spontaneous data, results are reported on preselected prototypical data, and truly speaker-disjunctive partitioning is still less common than simple cross-validation. Even speaker-disjunctive evaluation gives only limited insight into the generalization ability of today's emotion recognition engines, since the training and test data used for system development usually tend to be similar with respect to recording conditions, noise overlay, language, and types of emotion. A considerably more realistic impression can be gathered by inter-set evaluation: we therefore report results employing six standard databases in a cross-corpus evaluation experiment, which may also be helpful for assessing the prospects of adding resources for training and thereby overcoming the typical data sparseness in the field. To better cope with the observed high variances, different types of normalization are investigated. In total, 1.8k individual evaluations indicate the crucial performance inferiority of inter- to intra-corpus testing.
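One family of strategies the abstract mentions is feature normalization to reduce corpus-specific variance. As a minimal sketch (not the paper's exact method), the snippet below z-normalizes each acoustic feature separately within each corpus, so that features from different recording conditions become more comparable before cross-corpus training and testing. The function name, the array-based interface, and the zero-variance guard are illustrative assumptions.

```python
import numpy as np

def corpus_normalize(features, corpus_ids):
    """Z-normalize each feature column per corpus.

    Hypothetical helper illustrating corpus-wise normalization,
    one of several schemes compared in cross-corpus studies.
    features:   (n_samples, n_features) array of acoustic features
    corpus_ids: (n_samples,) array identifying each sample's corpus
    """
    features = np.asarray(features, dtype=float)
    corpus_ids = np.asarray(corpus_ids)
    out = np.empty_like(features)
    for cid in np.unique(corpus_ids):
        mask = corpus_ids == cid
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0)
        sigma[sigma == 0] = 1.0  # guard: leave constant features unscaled
        out[mask] = (features[mask] - mu) / sigma
    return out
```

After this step, each corpus contributes features with zero mean and unit variance, which removes gross mismatches in level and dynamic range between recording setups; the same idea can be applied per speaker instead of per corpus.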

Original language: English
Article number: 5557843
Pages (from-to): 119-131
Number of pages: 13
Journal: IEEE Transactions on Affective Computing
Volume: 1
Issue number: 2
DOIs
State: Published - Jul 2010

Keywords

  • Affective computing
  • cross-corpus evaluation
  • normalization
  • speech emotion recognition
