Analyzing the subspaces obtained by dimensionality reduction for human action recognition from 3D data

Marco Körner, Joachim Denzler

Publication: Contribution to book/report › Conference contribution › Peer-reviewed

5 citations (Scopus)

Abstract

Since depth-measuring devices for real-world scenarios became available in the recent past, the use of 3D data has come more into focus for human action recognition. Due to the increased amount of data, it seems advisable to model the trajectory of every landmark in the context of all other landmarks, which is commonly done by dimensionality reduction techniques like PCA. In this paper we present an approach to directly use the subspaces (i.e. their basis vectors) for feature extraction and classification of actions instead of projecting the landmark data themselves. This yields a fixed-length description of action sequences regardless of the number of provided frames. We give a comparison of various global techniques for dimensionality reduction and analyze their suitability for our proposed scheme. Experiments performed on the CMU Motion Capture dataset show promising recognition rates as well as robustness in the presence of noise and incorrect detection of landmarks.
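The core idea of the abstract can be sketched as follows: fit a PCA on the frames of one action sequence and use the top-k basis vectors themselves, rather than the projected frames, as the descriptor. Because the basis has shape k × D (D being the flattened landmark dimension), the descriptor length is independent of the number of frames. This is an illustrative sketch only; the function name, parameters, and choice of k are assumptions, not taken from the paper.

```python
import numpy as np

def subspace_descriptor(sequence, k=3):
    """Fixed-length action descriptor from a landmark sequence.

    sequence: (T, D) array of T frames, each a flattened vector of
    3D landmark coordinates. PCA is fit on the frames and the top-k
    basis vectors (not the projected data) are concatenated, so the
    descriptor length k*D does not depend on T.
    (Hypothetical sketch; names and parameters are not from the paper.)
    """
    X = sequence - sequence.mean(axis=0)           # center the frames
    # Right singular vectors of the centered data are the PCA basis
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].ravel()                          # shape (k*D,)

# Sequences of different lengths yield descriptors of identical size
rng = np.random.default_rng(0)
a = subspace_descriptor(rng.normal(size=(40, 15)))   # 40 frames
b = subspace_descriptor(rng.normal(size=(80, 15)))   # 80 frames
assert a.shape == b.shape == (45,)
```

Such fixed-length descriptors can then be fed to any standard classifier, which is what makes the subspace representation attractive for sequences of varying duration.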

Original language: English
Title: Proceedings - 2012 IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2012
Pages: 130-135
Number of pages: 6
DOIs
Publication status: Published - 2012
Published externally: Yes
Event: 2012 IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2012 - Beijing, China
Duration: 18 Sept. 2012 - 21 Sept. 2012

Publication series

Name: Proceedings - 2012 IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2012

Conference

Conference: 2012 IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2012
Country/Territory: China
City: Beijing
Period: 18/09/12 - 21/09/12

