Low-level fusion of audio and video feature for multi-modal emotion recognition

Matthias Wimmer, Björn Schuller, Dejan Arsic, Gerhard Rigoll, Bernd Radig

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

50 citations (Scopus)

Abstract

Bimodal emotion recognition through audiovisual feature fusion has been shown to be superior to each individual modality in the past. Still, synchronization of the two streams is a challenge, as many vision approaches work on a frame basis, as opposed to the turn or chunk basis of audio analysis. Therefore, late fusion schemes such as simple logic or voting strategies are commonly used for the overall estimation of the underlying affect. However, early fusion is known to be more effective in many other multimodal recognition tasks. We therefore suggest a combined analysis by descriptive statistics of audio and video Low-Level Descriptors for subsequent static SVM classification. This strategy also allows for a combined feature-space optimization, which will be discussed herein. The high effectiveness of this approach is shown on a database of 11.5 h containing six emotional situations in an airplane scenario.
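The early-fusion idea described in the abstract can be sketched as follows: frame-level Low-Level Descriptors from each stream are mapped to fixed-length chunk-level vectors via descriptive statistics (functionals), which sidesteps the differing frame rates, and the two vectors are then concatenated into one feature vector for a static classifier such as an SVM. This is a minimal illustrative sketch, not the authors' implementation; the function names, the particular statistics, and the descriptor counts are assumptions for the example.

```python
import numpy as np

def functionals(lld):
    """Descriptive statistics (functionals) over frame-level LLDs.

    lld: array of shape (n_frames, n_descriptors).
    Returns one fixed-length vector per chunk, independent of frame rate.
    """
    return np.concatenate([
        lld.mean(axis=0),
        lld.std(axis=0),
        lld.min(axis=0),
        lld.max(axis=0),
    ])

def early_fusion(audio_lld, video_lld):
    """Concatenate chunk-level functionals of both streams into a single
    feature vector, ready for static SVM classification."""
    return np.concatenate([functionals(audio_lld), functionals(video_lld)])

# Illustrative chunk: audio at 100 fps, video at 25 fps over the same 2 s
audio = np.random.randn(200, 5)  # hypothetical audio LLDs (e.g. pitch, energy)
video = np.random.randn(50, 3)   # hypothetical video LLDs (e.g. facial motion)
fused = early_fusion(audio, video)
print(fused.shape)  # (32,) = 4 statistics x (5 audio + 3 video) descriptors
```

Because both streams are reduced to statistics over the same chunk, the mismatch between audio and video frame rates disappears before classification, which is what makes this early-fusion scheme feasible.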

Original language: English
Title: VISAPP 2008 - 3rd International Conference on Computer Vision Theory and Applications, Proceedings
Pages: 145-151
Number of pages: 7
Publication status: Published - 2008
Event: 3rd International Conference on Computer Vision Theory and Applications, VISAPP 2008 - Funchal, Madeira, Portugal
Duration: 22 Jan 2008 - 25 Jan 2008

Publication series

Name: VISAPP 2008 - 3rd International Conference on Computer Vision Theory and Applications, Proceedings
Volume: 2

Conference

Conference: 3rd International Conference on Computer Vision Theory and Applications, VISAPP 2008
Country/Territory: Portugal
City: Funchal, Madeira
Period: 22/01/08 - 25/01/08

