Improving generalisation and robustness of acoustic affect recognition

Publication: Contribution to book/report › Conference contribution › Peer reviewed

5 citations (Scopus)

Abstract

Emotion recognition in real-life conditions faces several challenging factors that most studies on emotion recognition do not consider, such as background noise, varying recording levels, and the acoustic properties of the environment. This paper presents a systematic evaluation of the influence of background noise of various types and SNRs, as well as of recording-level variations, on the performance of automatic emotion recognition from speech. Both natural/spontaneous and acted/prototypical emotions are considered. Besides the well-known influence of additive noise, a significant influence of the recording level on recognition performance is observed. Multi-condition learning with various noise types and recording levels is proposed as a way to increase the robustness of methods based on standard acoustic feature sets and commonly used classifiers. It is compared to matched-conditions learning and is found to be almost on par for many settings.
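The multi-condition idea described in the abstract, i.e. augmenting the training data with copies of each utterance mixed with different noise types at different SNRs and scaled to different recording levels, can be sketched as follows. This is a minimal NumPy illustration of the general technique; the function names, the SNR/gain grids, and the mixing details are my own assumptions, not taken from the paper.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the mixture speech + noise has the requested SNR (in dB)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

def apply_level(signal, gain_db):
    """Simulate a different recording level with a broadband gain (in dB)."""
    return signal * (10.0 ** (gain_db / 20.0))

def multi_condition_set(speech, noises, snrs_db, gains_db, seed=0):
    """Generate one augmented copy per (noise type, SNR, recording level) condition.

    Hypothetical helper: grids and segment selection are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    out = []
    for noise in noises:
        # Pick a random noise segment of the same length as the speech signal.
        start = rng.integers(0, len(noise) - len(speech) + 1)
        seg = noise[start:start + len(speech)]
        for snr in snrs_db:
            for gain in gains_db:
                out.append(apply_level(mix_at_snr(speech, seg, snr), gain))
    return out
```

A matched-conditions baseline, by contrast, would train and test on a single fixed (noise, SNR, level) combination; the multi-condition set above instead pools all combinations into one training set.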

Original language: English
Title: ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction
Pages: 517-521
Number of pages: 5
DOIs
Publication status: Published - 2012
Event: 14th ACM International Conference on Multimodal Interaction, ICMI 2012 - Santa Monica, CA, United States
Duration: 22 Oct 2012 – 26 Oct 2012

Publication series

Name: ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction

Conference

Conference: 14th ACM International Conference on Multimodal Interaction, ICMI 2012
Country/Territory: United States
City: Santa Monica, CA
Period: 22/10/12 – 26/10/12
