EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition

Maurice Gerczuk, Shahin Amiriparian, Sandra Ottl, Björn W. Schuller

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

In this manuscript, the topic of multi-corpus Speech Emotion Recognition (SER) is approached from a deep transfer learning perspective. A large corpus of emotional speech data, EmoSet, is assembled from a number of existing SER corpora. In total, EmoSet contains 84 181 audio recordings from 26 SER corpora with a total duration of over 65 hours. The corpus is then utilised to create a novel framework for multi-corpus SER and general audio recognition, namely EmoNet. A combination of a deep ResNet architecture and residual adapters is transferred from the field of multi-domain visual recognition to multi-corpus SER on EmoSet. The introduced residual adapter approach enables parameter-efficient training of a multi-domain SER model on all 26 corpora. A shared model with only 3.5 times the number of parameters of a model trained on a single database leads to increased performance for 21 of the 26 corpora in EmoSet. Using repeated training runs and Almost Stochastic Order testing with a significance level of α = 0.05, these improvements are significant for 15 datasets, while only three corpora see significant decreases across the residual adapter transfer experiments. Finally, we make our EmoNet framework publicly available for users and developers at https://github.com/EIHW/EmoNet.
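The residual adapter idea summarised above can be sketched in a few lines: a shared backbone layer is kept fixed across corpora, while each corpus gets a small domain-specific branch (a 1×1 convolution, i.e. a linear map over channels) whose output is added back onto the shared features. The function name, array shapes, and corpus weights below are hypothetical illustrations, not the EmoNet implementation; this is a minimal NumPy sketch assuming channel-first feature maps.

```python
import numpy as np

def residual_adapter(x, w_domain):
    """Apply a domain-specific 1x1 convolution as a residual branch.

    x: shared feature map of shape (channels, height, width).
    w_domain: per-corpus adapter weights of shape (channels, channels).
    Returns x plus the adapted features (identity skip connection),
    so w_domain = 0 recovers the shared model unchanged.
    """
    # A 1x1 convolution is a linear map over the channel dimension:
    # adapted[o, h, w] = sum_c w_domain[o, c] * x[c, h, w]
    adapted = np.einsum('oc,chw->ohw', w_domain, x)
    return x + adapted

# Hypothetical sizes: 8 channels, 4x4 feature map, one adapter per corpus.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w_corpus_a = 0.01 * rng.standard_normal((8, 8))  # small init near identity mapping
y = residual_adapter(x, w_corpus_a)

# Each corpus adds only c*c parameters (vs. 9*c*c for a full 3x3 conv),
# which is why a model covering many corpora can stay a small multiple
# of a single-corpus model's size.
```

The design choice this illustrates: only the tiny per-corpus branches are trained per domain while the backbone is shared, which is the parameter-efficiency argument made in the abstract.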

Original language: English
Pages (from-to): 1472-1487
Number of pages: 16
Journal: IEEE Transactions on Affective Computing
Volume: 14
Issue number: 2
DOIs
State: Published - 1 Apr 2023
Externally published: Yes

Keywords

  • audio processing
  • computational paralinguistics
  • multi-corpus
  • multi-domain learning
  • speech emotion recognition
  • transfer learning
