Robust multi-modal group action recognition in meetings from disturbed videos with the asynchronous Hidden Markov model

Marc Al-Hames, Claus Lenz, Stephan Reiter, Joachim Schenk, Frank Wallhoff, Gerhard Rigoll

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

7 citations (Scopus)

Abstract

The Asynchronous Hidden Markov Model (AHMM) models the joint likelihood of two observation sequences, even if the streams are not synchronised. We explain this concept and how the model is trained with the EM algorithm. We then show how the AHMM can be applied to the analysis of group action events in meetings, from both clear and disturbed data. On clear data, the AHMM outperforms an early-fusion HMM by 5.7% recognition rate (a relative error reduction of 38.5%). On occluded data, the improvement is on average 6.5% recognition rate (a relative error reduction of 40%). Thus asynchrony is a dominant factor in meeting analysis, even when the data is disturbed. The AHMM exploits this and is therefore considerably more robust to disturbances.

Original language: English
Title: 2007 IEEE International Conference on Image Processing, ICIP 2007 Proceedings
Pages: II-213 - II-216
DOIs
Publication status: Published - 2007
Event: 14th IEEE International Conference on Image Processing, ICIP 2007 - San Antonio, TX, United States
Duration: 16 Sept 2007 - 19 Sept 2007

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2
ISSN (Print): 1522-4880

Conference

Conference: 14th IEEE International Conference on Image Processing, ICIP 2007
Country/Territory: United States
City: San Antonio, TX
Period: 16/09/07 - 19/09/07
