Abstract
In this contribution we introduce a novel approach to combining acoustic features and language information for robust automatic recognition of a speaker's emotion. Seven discrete emotional states are classified throughout the work. First, a model for the recognition of emotion from acoustic features is presented. The features derived from the signal, pitch, energy, and spectral contours are ranked by their quantitative contribution to the estimation of an emotion. Several classification methods, including linear classifiers, Gaussian Mixture Models, Neural Nets, and Support Vector Machines, are compared by their performance on this task. Second, an approach to emotion recognition from the spoken content is introduced, applying Belief-Network-based spotting for emotional key phrases. Finally, the two information sources are integrated in a soft decision fusion using a Neural Net. The resulting gain is evaluated and compared to other approaches. Two emotional speech corpora used for training and evaluation are described in detail, and the results achieved by applying the proposed novel approach to speaker emotion recognition are presented and discussed.
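The abstract's core idea is late (soft decision) fusion: each channel produces posterior probabilities over the seven emotion classes, and a small neural net learns the final decision from the concatenated posteriors. The following is a minimal sketch of that scheme, not the authors' implementation: the data is synthetic, the emotion label set, feature dimensionality, and the use of scikit-learn's `SVC` and `MLPClassifier` are all assumptions standing in for the paper's acoustic front end and Belief-Network spotter.

```python
# Hedged sketch of soft-decision fusion of acoustic and linguistic
# emotion posteriors with a small neural net. Synthetic data only;
# the real feature extraction and key-phrase spotter are assumed
# to exist elsewhere.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Illustrative seven-class label set (assumption; the paper's exact
# emotion inventory is not given in the abstract).
EMOTIONS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]
N_CLASSES = len(EMOTIONS)

rng = np.random.default_rng(0)
n = 700
y = rng.integers(0, N_CLASSES, size=n)

# Stand-in for per-utterance statistics of pitch/energy/spectral contours.
X_acoustic = rng.normal(size=(n, 20)) + 0.3 * y[:, None]

# Stand-in for the key-phrase spotter: weakly informative posteriors
# that place mass 0.5 on the true class and spread the rest evenly.
p_linguistic = np.full((n, N_CLASSES), 0.5 / (N_CLASSES - 1))
p_linguistic[np.arange(n), y] = 0.5

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Acoustic channel: an SVM (one of the compared classifiers) with
# probability outputs so it can feed the soft fusion stage.
svm = SVC(probability=True, random_state=0)
svm.fit(X_acoustic[idx_tr], y[idx_tr])
p_acoustic = svm.predict_proba(X_acoustic)

# Soft fusion: concatenate both posterior vectors and let a small
# neural net produce the final seven-class decision.
fusion_in = np.hstack([p_acoustic, p_linguistic])
fusion = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
fusion.fit(fusion_in[idx_tr], y[idx_tr])

print("fused test accuracy:", fusion.score(fusion_in[idx_te], y[idx_te]))
```

Because the fusion net sees full posterior vectors rather than hard labels from each channel, it can learn when to trust the acoustic channel over the linguistic one (and vice versa), which is the motivation for soft rather than hard decision fusion.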
Original language | English |
---|---|
Pages (from - to) | I577-I580 |
Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
Volume | 1 |
Publication status | Published - 2004 |
Event | Proceedings - IEEE International Conference on Acoustics, Speech, and Signal Processing - Montreal, Que., Canada. Duration: 17 May 2004 → 21 May 2004 |