Universum Autoencoder-Based Domain Adaptation for Speech Emotion Recognition
Abstract
One of the serious obstacles to applying speech emotion recognition systems in real-life settings is the poor generalization of emotion classifiers. Recognition systems often suffer a dramatic drop in performance when tested on speech data from different speakers, acoustic environments, linguistic content, or domain conditions. In this letter, we propose a novel unsupervised domain adaptation model, called the Universum autoencoder, to improve the performance of systems evaluated under mismatched training and test conditions. To address the mismatch, the proposed model not only learns discriminative information from labeled data but also incorporates prior knowledge from unlabeled data into the learning. Experimental results on the labeled Geneva Whispered Emotion Corpus and three other unlabeled databases demonstrate the effectiveness of the proposed method compared with other domain adaptation methods.
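The abstract above does not spell out the training objective, but the general idea it describes — supervised learning on labeled emotional speech combined with a Universum-style regularizer on unlabeled out-of-domain data — can be sketched as a combined loss. The following is a minimal NumPy illustration, not the authors' implementation: the linear encoder/decoder/classifier, the dimensions, the weighting hyperparameters `lam_rec` and `lam_unv`, and the choice of a KL-to-uniform Universum term are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy linear encoder, decoder, and classifier (weights are illustrative,
# not trained; a real model would be a deep network fit by gradient descent).
d_in, d_h, n_cls = 8, 4, 3
W_enc = rng.normal(size=(d_in, d_h)) * 0.1
W_dec = rng.normal(size=(d_h, d_in)) * 0.1
W_cls = rng.normal(size=(d_h, n_cls)) * 0.1

def universum_ae_loss(x_lab, y_lab, x_unl, lam_rec=1.0, lam_unv=0.5):
    """Sketch of a Universum-autoencoder-style objective:
    cross-entropy on labeled data + reconstruction on all data
    + a Universum term pushing unlabeled (out-of-domain) samples
    toward a maximally uncertain class posterior."""
    h_lab, h_unl = x_lab @ W_enc, x_unl @ W_enc

    # (1) Supervised cross-entropy on labeled emotional speech.
    p_lab = softmax(h_lab @ W_cls)
    ce = -np.mean(np.log(p_lab[np.arange(len(y_lab)), y_lab] + 1e-12))

    # (2) Autoencoder reconstruction on labeled + unlabeled data.
    x_all = np.vstack([x_lab, x_unl])
    rec = np.mean((x_all @ W_enc @ W_dec - x_all) ** 2)

    # (3) Universum regularizer: KL(p || uniform) is minimized, so the
    # classifier is discouraged from confident predictions on Universum data.
    p_unl = softmax(h_unl @ W_cls)
    unv = np.mean(np.sum(p_unl * np.log(p_unl * n_cls + 1e-12), axis=1))

    return ce + lam_rec * rec + lam_unv * unv

# Toy batches standing in for labeled and unlabeled acoustic features.
x_lab = rng.normal(size=(5, d_in))
y_lab = np.array([0, 1, 2, 0, 1])
x_unl = rng.normal(size=(7, d_in))
loss = universum_ae_loss(x_lab, y_lab, x_unl)
```

In this sketch the three terms reflect the abstract's claim that the model "learns discriminative information from labeled data" (term 1) while "incorporating prior knowledge from unlabeled data" (terms 2 and 3); how the paper actually weights and parameterizes these components is specified in the letter itself.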
| Original language | English |
|---|---|
| Article number | 7862157 |
| Pages (from-to) | 500-504 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| Volume | 24 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2017 |
| Externally published | Yes |
Keywords
- Deep learning
- domain adaptation
- speech emotion recognition
- universum autoencoders (U-AE)