The role of task and acoustic similarity in audio transfer learning: Insights from the speech emotion recognition case

Andreas Triantafyllopoulos, Björn W. Schuller

Research output: Contribution to journal › Conference article › peer-review

16 Scopus citations

Abstract

With the rise of deep learning, deep knowledge transfer has emerged as one of the most effective techniques for achieving state-of-the-art performance with deep neural networks. Much recent research has focused on understanding the mechanisms of transfer learning in the image and language domains. We perform a similar investigation for the case of speech emotion recognition (SER), and conclude that transfer learning for SER is influenced both by the choice of pre-training task and by the differences in acoustic conditions between the upstream and downstream data sets, with the former having a bigger impact. The effect of each factor is isolated by first transferring knowledge between different tasks on the same data, and then from the original data to corrupted versions of it for the same task. We also demonstrate that layers closer to the input see more adaptation than those closer to the output in both cases, a finding which explains why previous works often found it necessary to fine-tune all layers during transfer learning.
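The layer-wise adaptation analysis described above can be illustrated with a minimal sketch: compare a model's weights before and after fine-tuning and report the relative change per layer. The function and toy data below are hypothetical, not the paper's actual measurement protocol, and stand in for real pre-trained and fine-tuned checkpoints.

```python
import numpy as np

def layer_adaptation(pretrained, finetuned):
    """Relative L2 change of each layer's weights after fine-tuning.

    Larger values indicate more adaptation; the paper reports that
    layers closer to the input adapt more than those near the output.
    """
    return {
        name: float(np.linalg.norm(finetuned[name] - w) / np.linalg.norm(w))
        for name, w in pretrained.items()
    }

# Toy checkpoints: perturb earlier layers more strongly to mimic the
# input-side adaptation pattern described in the abstract.
rng = np.random.default_rng(0)
pre = {f"layer{i}": rng.standard_normal((8, 8)) for i in range(3)}
post = {
    name: w + 0.1 * (3 - i) * rng.standard_normal(w.shape)
    for i, (name, w) in enumerate(pre.items())
}
changes = layer_adaptation(pre, post)
```

In practice one would load two real checkpoints of the same architecture and run the same comparison over their state dictionaries.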

Original language: English
Pages (from-to): 7268-7272
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
DOIs
State: Published - 2021
Externally published: Yes
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 6 Jun 2021 - 11 Jun 2021

Keywords

  • Representation learning
  • Speech emotion recognition
  • Transfer learning

