Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning

Rui Liu, Berrak Sisman, Björn W. Schuller, Guanglai Gao, Haizhou Li

Research output: Contribution to journal › Conference article › peer-review


Abstract

Emotion classification of speech and assessment of emotion strength are required in applications such as emotional text-to-speech and voice conversion. An emotion attribute ranking function based on a Support Vector Machine (SVM) was previously proposed to predict emotion strength for an emotional speech corpus. However, the trained ranking function does not generalize to new domains, which limits the scope of applications, especially for out-of-domain or unseen speech. In this paper, we propose a data-driven deep learning model, StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech. This is achieved by fusing emotional data from various domains. We adopt a multi-task learning network architecture that includes an acoustic encoder, a strength predictor, and an auxiliary emotion predictor. Experiments show that the emotion strength predicted by the proposed StrengthNet is highly correlated with ground-truth scores for both seen and unseen speech. We release the source code at: https://github.com/ttslr/StrengthNet.
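To make the multi-task design described in the abstract concrete, the following is a minimal PyTorch sketch of a shared acoustic encoder feeding a strength-regression head and an auxiliary emotion-classification head. The encoder type (BiLSTM), layer sizes, mean pooling, and loss weighting are illustrative assumptions only and do not reproduce the authors' released implementation; see the linked repository for the actual code.

import torch
import torch.nn as nn

class StrengthNetSketch(nn.Module):
    """Illustrative multi-task model: a shared acoustic encoder with
    (a) an emotion-strength regressor and (b) an auxiliary emotion
    classifier. All layer sizes are assumptions, not the paper's."""

    def __init__(self, n_mels=80, hidden=128, n_emotions=5):
        super().__init__()
        # Shared acoustic encoder: BiLSTM over mel-spectrogram frames.
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True,
                               bidirectional=True)
        # Main task: predict a scalar emotion strength in [0, 1].
        self.strength_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())
        # Auxiliary task: classify the emotion category.
        self.emotion_head = nn.Linear(2 * hidden, n_emotions)

    def forward(self, mels):                      # mels: (B, T, n_mels)
        h, _ = self.encoder(mels)                 # (B, T, 2*hidden)
        pooled = h.mean(dim=1)                    # utterance-level average
        strength = self.strength_head(pooled).squeeze(-1)   # (B,)
        emotion_logits = self.emotion_head(pooled)           # (B, n_emotions)
        return strength, emotion_logits

# Multi-task objective: regression loss against strength scores plus
# cross-entropy on emotion labels; the 0.5 weighting is an assumed choice.
def multitask_loss(strength, strength_target, logits, emotion_target, alpha=0.5):
    mse = nn.functional.mse_loss(strength, strength_target)
    ce = nn.functional.cross_entropy(logits, emotion_target)
    return mse + alpha * ce

In this sketch the auxiliary emotion classifier shares the encoder with the strength predictor, so gradients from the classification loss regularize the acoustic representation, which is one plausible way multi-task training could aid generalization to unseen speech.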

Original language: English
Pages (from-to): 5493-5497
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
State: Published - 2022
Externally published: Yes
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022, Incheon, Republic of Korea
Duration: 18 Sep 2022 – 22 Sep 2022

Keywords

  • Emotion strength
  • data-driven
  • deep learning
