Audiovisual behavior modeling by combined feature spaces

Björn Schuller, Dejan Arsic, Gerhard Rigoll, Matthias Wimmer, Bernd Radig

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

67 Scopus citations

Abstract

Great interest has recently been shown in behavior modeling, especially for public surveillance tasks. The benefit of using several input cues, such as audio and video, is generally agreed upon; yet, the synchronization and fusion of these information sources remain the main challenge. We therefore present results for a feature-space combination that allows for overall feature-space optimization. Audio and video features are first derived as Low-Level Descriptors. Synchronization and feature combination are then achieved by multivariate time-series analysis. Test runs on a database of aggressive, cheerful, intoxicated, nervous, neutral, and tired behavior in an airplane scenario show a significant improvement over each single modality.
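The combined-feature-space idea from the abstract can be illustrated with a minimal sketch: audio and video Low-Level Descriptors typically arrive at different frame rates, so the streams are resampled to a common rate and concatenated into one feature vector per frame. The frame rates, feature dimensions, and the linear-interpolation resampling below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical example streams: audio LLDs at 100 fps, video LLDs at 25 fps,
# both covering the same 4-second segment.
rng = np.random.default_rng(0)
audio_llds = rng.normal(size=(400, 6))   # e.g. pitch, energy, spectral features
video_llds = rng.normal(size=(100, 4))   # e.g. facial-point coordinates

def synchronize(features, src_fps, dst_fps, duration_s):
    """Resample a feature stream to a common frame rate by linear interpolation."""
    n_dst = int(duration_s * dst_fps)
    t_src = np.arange(features.shape[0]) / src_fps
    t_dst = np.arange(n_dst) / dst_fps
    return np.column_stack(
        [np.interp(t_dst, t_src, features[:, d]) for d in range(features.shape[1])]
    )

common_fps = 25
audio_sync = synchronize(audio_llds, 100, common_fps, 4)
video_sync = synchronize(video_llds, 25, common_fps, 4)

# Combined feature space: one audiovisual vector per synchronized frame,
# ready for joint feature selection or classification.
combined = np.hstack([audio_sync, video_sync])
print(combined.shape)  # (100, 10)
```

Fusing at the feature level, rather than combining per-modality classifier decisions, is what enables the overall feature-space optimization the abstract refers to.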

Original language: English
Title of host publication: 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '07
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 733-736
Number of pages: 4
ISBN (Print): 1424407281, 9781424407286
DOIs
State: Published - 2007
Event: 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '07 - Honolulu, HI, United States
Duration: 15 Apr 2007 → 20 Apr 2007

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2
ISSN (Print): 1520-6149

Conference

Conference: 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '07
Country/Territory: United States
City: Honolulu, HI
Period: 15/04/07 → 20/04/07

Keywords

  • Affective computing
  • Audiovisual emotion recognition
  • Feature fusion
  • Synergistic multimodality
