Multi-modal activity and dominance detection in smart meeting rooms

Benedikt Hörnler, Gerhard Rigoll

Publication: Contribution to book/report/conference proceedings › Conference contribution › Peer-reviewed

6 citations (Scopus)

Abstract

In this paper, a new approach to activity and dominance modeling in meetings is presented. For this purpose, low-level acoustic and visual features are extracted from audio and video capture devices. Hidden Markov Models (HMMs) are used for the segmentation and classification of activity levels for each participant. Additionally, more semantic features are applied in a two-layer HMM approach. The experiments show that the acoustic feature is the most important one. The early fusion of acoustic and global-motion features achieves nearly as good results as the acoustic feature alone. All the other early fusion approaches are outperformed by the acoustic feature. Moreover, the two-layer model could not match the results of the acoustic feature.
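The paper itself contains no code; the following is a minimal sketch of the general idea described in the abstract, namely early fusion of frame-level acoustic and global-motion features followed by HMM-based decoding of activity levels for one participant. It uses the hmmlearn library, and the feature names, dimensions, and number of activity levels are illustrative assumptions, not values from the paper.

    # Sketch (not the authors' implementation): early feature fusion + HMM decoding.
    import numpy as np
    from hmmlearn import hmm

    # Hypothetical frame-synchronous features for one participant (rows are frames).
    n_frames = 1000
    acoustic_feats = np.random.randn(n_frames, 13)   # e.g. MFCC-like acoustic vectors
    motion_feats = np.random.randn(n_frames, 4)      # e.g. global-motion descriptors

    # Early fusion: concatenate both modalities per frame into one feature vector.
    fused = np.hstack([acoustic_feats, motion_feats])

    # A Gaussian HMM whose hidden states are interpreted as activity levels.
    n_activity_levels = 3
    model = hmm.GaussianHMM(n_components=n_activity_levels,
                            covariance_type="diag", n_iter=20, random_state=0)
    model.fit(fused)

    # Viterbi decoding yields a per-frame segmentation into activity levels.
    activity_sequence = model.predict(fused)
    print(activity_sequence[:20])

A supervised variant of the authors' setup would instead train one HMM per activity class on labeled segments and pick the class with the highest likelihood; the sketch above only illustrates the fusion and decoding steps.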

Original language: English
Title: 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing - Proceedings, ICASSP 2009
Pages: 1777-1780
Number of pages: 4
DOIs
Publication status: Published - 2009
Event: 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2009 - Taipei, Taiwan
Duration: 19 Apr 2009 - 24 Apr 2009

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Conference

Conference: 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2009
Country/Territory: Taiwan
City: Taipei
Period: 19/04/09 - 24/04/09
