Combining frame and turn-level information for robust recognition of emotions within speech

Bogdan Vlasenko, Björn Schuller, Andreas Wendemuth, Gerhard Rigoll

Publication: Contribution to book/report/conference proceedings › Conference contribution › Peer review

28 citations (Scopus)

Abstract

Current approaches to the recognition of emotion within speech usually use statistical feature information obtained by applying functionals at the turn or chunk level. Yet, it is well known that important information on temporal sub-layers, such as the frame level, is thereby lost. We therefore investigate the benefits of integrating such information within the turn-level feature space. For frame-level analysis we use GMMs for classification with 39 MFCC and energy features after cepstral mean subtraction (CMS). In a subsequent step, the output scores are fed forward into a turn-level SVM emotion recognition engine with a feature space of roughly 1.4k features. We thereby use a variety of Low-Level Descriptors and functionals to cover prosodic, speech quality, and articulatory aspects. Extensive test runs are carried out on the public databases EMO-DB and SUSAS. Speaker-independent analysis is addressed by speaker normalization. The overall results strongly emphasize the benefits of feature integration on diverse time scales.
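The fusion described in the abstract can be illustrated with a minimal sketch: per-emotion GMMs are trained on frame-level MFCC/energy vectors, and their per-turn log-likelihood scores are appended to the turn-level functional vector before SVM classification. This is not the authors' implementation; the function names, the use of scikit-learn, and the placeholder inputs (`frames_per_emotion`, `turn_functionals`, `turn_frames`) are assumptions for illustration only.

```python
# Illustrative sketch of frame/turn-level fusion for speech emotion recognition.
# Assumes MFCC+energy features (with CMS) and turn-level functionals are computed upstream.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def train_frame_level_gmms(frames_per_emotion, n_components=16):
    """Fit one GMM per emotion class on 39-dim frame vectors (dict: label -> ndarray of frames)."""
    return {
        label: GaussianMixture(n_components=n_components, covariance_type="diag").fit(frames)
        for label, frames in frames_per_emotion.items()
    }


def frame_level_scores(gmms, turn_frames):
    """Average per-class log-likelihood of one turn's frames under each emotion GMM."""
    return np.array([gmms[label].score(turn_frames) for label in sorted(gmms)])


def build_turn_vector(turn_functionals, gmms, turn_frames):
    """Append frame-level GMM scores to the large turn-level functional vector."""
    return np.concatenate([turn_functionals, frame_level_scores(gmms, turn_frames)])


# Training the turn-level SVM on the combined feature space (placeholder data assumed):
# gmms = train_frame_level_gmms(frames_per_emotion)
# X = np.vstack([build_turn_vector(f, gmms, fr) for f, fr in zip(functionals_list, frames_list)])
# clf = SVC(kernel="linear").fit(StandardScaler().fit_transform(X), labels)
```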

Original language: English
Title: International Speech Communication Association - 8th Annual Conference of the International Speech Communication Association, Interspeech 2007
Pages: 2712-2715
Number of pages: 4
Publication status: Published - 2007
Event: 8th Annual Conference of the International Speech Communication Association, Interspeech 2007 - Antwerp, Belgium
Duration: 27 Aug. 2007 → 31 Aug. 2007

Publication series

Name: International Speech Communication Association - 8th Annual Conference of the International Speech Communication Association, Interspeech 2007
Volume: 4

Conference

Conference: 8th Annual Conference of the International Speech Communication Association, Interspeech 2007
Country/Territory: Belgium
City: Antwerp
Period: 27/08/07 → 31/08/07
