Multitask Learning From Augmented Auxiliary Data for Improving Speech Emotion Recognition

Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Björn W. Schuller

Research output: Contribution to journal › Article › peer-review


Abstract

Despite the recent progress in speech emotion recognition (SER), state-of-the-art systems lack generalisation across different conditions. A key underlying reason for poor generalisation is the scarcity of emotion datasets, which is a significant roadblock to designing robust machine learning (ML) models. Recent works in SER focus on utilising multitask learning (MTL) methods to improve generalisation by learning shared representations. However, most of these studies propose MTL solutions that require meta labels for auxiliary tasks, which limits the training of SER systems. This paper proposes an MTL framework (MTL-AUG) that learns generalised representations from augmented data. We utilise augmentation-type classification and unsupervised reconstruction as auxiliary tasks, which allow training SER systems on augmented data without requiring any meta labels for auxiliary tasks. The semi-supervised nature of MTL-AUG allows for the exploitation of the abundant unlabelled data to further boost the performance of SER. We comprehensively evaluate the proposed framework in the following settings: (1) within-corpus, (2) cross-corpus and cross-language, (3) noisy speech, and (4) adversarial attacks. Our evaluations using the widely used IEMOCAP, MSP-IMPROV, and EMODB datasets show improved results compared to existing state-of-the-art methods.
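To make the auxiliary-task setup in the abstract concrete, the following PyTorch snippet is a minimal sketch, not the authors' implementation: the encoder architecture, layer sizes, number of augmentation types, and the loss weights alpha and beta are all illustrative assumptions. It shows the core idea of MTL-AUG-style training, where a shared encoder feeds an emotion classifier (the primary task), an augmentation-type classifier, and a reconstruction decoder (the two auxiliary tasks, whose labels come for free from the augmentation pipeline or the input itself).

```python
# Minimal sketch of multitask training with augmentation-type classification
# and unsupervised reconstruction as auxiliary tasks. All hyperparameters
# (feat_dim, hidden, alpha, beta) are illustrative assumptions.
import torch
import torch.nn as nn

class MTLAugSketch(nn.Module):
    def __init__(self, feat_dim=128, hidden=256, n_emotions=4, n_aug_types=3):
        super().__init__()
        # Shared encoder: its representation is shaped by all three tasks.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Primary task: emotion classification (needs emotion labels).
        self.emotion_head = nn.Linear(hidden, n_emotions)
        # Auxiliary task 1: predict which augmentation produced the input;
        # these labels are generated by the augmentation pipeline itself.
        self.aug_head = nn.Linear(hidden, n_aug_types)
        # Auxiliary task 2: unsupervised reconstruction of the input features.
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.emotion_head(z), self.aug_head(z), self.decoder(z)

model = MTLAugSketch()
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy labelled batch: augmented features, emotion labels, and the
# augmentation-type ids produced by the augmentation pipeline.
x = torch.randn(8, 128)
y_emotion = torch.randint(0, 4, (8,))
y_aug = torch.randint(0, 3, (8,))

emo_logits, aug_logits, recon = model(x)
alpha, beta = 0.5, 0.5  # illustrative auxiliary-task weights
loss = ce(emo_logits, y_emotion) + alpha * ce(aug_logits, y_aug) + beta * mse(recon, x)
opt.zero_grad()
loss.backward()
opt.step()
```

The semi-supervised aspect follows directly from this structure: for unlabelled batches, the emotion-classification term is simply dropped and only the two auxiliary losses update the shared encoder.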

Original language: English
Pages (from-to): 3164-3176
Number of pages: 13
Journal: IEEE Transactions on Affective Computing
Volume: 14
Issue number: 4
State: Published - 1 Oct 2023
Externally published: Yes

Keywords

  • Speech emotion recognition
  • multitask learning
  • representation learning
