Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine - Belief network architecture

Research output: Contribution to journal › Conference article › peer-review

381 Scopus citations

Abstract

In this contribution we introduce a novel approach that combines acoustic features and linguistic information for robust automatic recognition of a speaker's emotion. Seven discrete emotional states are classified throughout the work. First, a model for the recognition of emotion from acoustic features is presented. Features derived from the signal, pitch, energy, and spectral contours are ranked by their quantitative contribution to the estimation of an emotion. Several classification methods, including linear classifiers, Gaussian Mixture Models, Neural Nets, and Support Vector Machines, are compared on this task. Second, an approach to emotion recognition from the spoken content is introduced, applying Belief Network based spotting for emotional key phrases. Finally, the two information sources are integrated by soft decision fusion using a Neural Net. The resulting gain is evaluated and compared to other approaches. Two emotional speech corpora used for training and evaluation are described in detail, and the results achieved by applying the proposed approach to speaker emotion recognition are presented and discussed.
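The final fusion step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a one-hidden-layer network with randomly initialized weights (hidden size 16 is an arbitrary choice) that maps the concatenated per-class posteriors of the acoustic classifier and the linguistic key-phrase spotter to a fused distribution over the seven emotion classes.

```python
import numpy as np

N_CLASSES = 7  # seven discrete emotional states, as in the paper

rng = np.random.default_rng(0)
# Hypothetical fusion-net parameters; in the paper these would be trained.
W1 = rng.normal(scale=0.1, size=(2 * N_CLASSES, 16))  # hidden layer (size assumed)
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse(p_acoustic, p_linguistic):
    """Soft decision fusion: combine two 7-class posterior vectors
    (acoustic and linguistic) into one emotion distribution."""
    x = np.concatenate([p_acoustic, p_linguistic], axis=-1)
    h = np.tanh(x @ W1 + b1)
    return softmax(h @ W2 + b2)

# Dummy stand-ins for the two classifiers' per-class posteriors.
p_ac = softmax(rng.normal(size=N_CLASSES))
p_lx = softmax(rng.normal(size=N_CLASSES))
fused = fuse(p_ac, p_lx)
```

The point of the soft fusion is that both sources contribute full probability distributions rather than hard labels, so the fusion network can weigh a confident linguistic cue against an ambiguous acoustic one (and vice versa).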

Original language: English
Pages (from-to): I577-I580
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 1
State: Published - 2004
Event: Proceedings - IEEE International Conference on Acoustics, Speech, and Signal Processing - Montreal, Que., Canada
Duration: 17 May 2004 - 21 May 2004

