Learning weighted joint-based features for action recognition using depth camera

Guang Chen, Daniel Clarke, Alois Knoll

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

2 citations (Scopus)

Abstract

Human action recognition based on joints is a challenging task. The 3D positions of the tracked joints are very noisy when occlusions occur, which increases the intra-class variation in the actions. In this paper, we propose a novel approach to recognizing human actions with weighted joint-based features. Previous work has focused on hand-tuned joint-based features, which are difficult and time-consuming to extend to other modalities. In contrast, we compute the joint-based features using an unsupervised learning approach. To capture the intra-class variance, a multiple kernel learning approach is employed to learn the skeleton structure that combines these joint-based features. We test our algorithm on the action recognition task using the Microsoft Research Action3D (MSRAction3D) dataset. Experimental evaluation shows that the proposed approach outperforms state-of-the-art action recognition algorithms on depth videos.
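The abstract only summarizes the pipeline; as a rough illustration of the general idea of weighting per-joint features, the sketch below combines one RBF kernel per joint into a single precomputed SVM kernel and picks the combination weights by a simple cross-validated random search. This is a minimal sketch under assumed names, feature shapes, and a simplified weight search, not the authors' implementation; genuine multiple kernel learning solvers optimise the weights jointly with the classifier objective.

```python
# Illustrative sketch only (not the method from the paper): combine per-joint
# feature kernels with learned weights and classify with an SVM.
# All data shapes, helper names, and the weight-search strategy are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def rbf_kernel(X, gamma=1.0):
    """Pairwise RBF kernel matrix for one joint's feature vectors (n x d)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))


def combined_kernel(per_joint_kernels, weights):
    """Convex combination of base kernels, one kernel per joint."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, per_joint_kernels, axes=1)


def learn_weights(per_joint_kernels, y, n_iter=50, seed=0):
    """Crude random search over simplex weights, scored by SVM cross-validation."""
    rng = np.random.default_rng(seed)
    best_w, best_score = None, -np.inf
    for _ in range(n_iter):
        w = rng.dirichlet(np.ones(len(per_joint_kernels)))
        K = combined_kernel(per_joint_kernels, w)
        score = cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score


if __name__ == "__main__":
    # Toy data: 60 sequences, 20 joints, each joint described by an 8-dim feature.
    rng = np.random.default_rng(1)
    n, n_joints, dim = 60, 20, 8
    features = rng.normal(size=(n_joints, n, dim))
    labels = np.repeat(np.arange(3), n // 3)          # three toy action classes
    kernels = np.stack([rbf_kernel(features[j]) for j in range(n_joints)])
    w, acc = learn_weights(kernels, labels)
    print("learned joint weights:", np.round(w, 3))
    print("cross-validated accuracy:", acc)
```

With synthetic random features the accuracy stays near chance; the point of the sketch is only the mechanics of weighting per-joint kernels before classification.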

Original language: English
Title: VISAPP 2014 - Proceedings of the 9th International Conference on Computer Vision Theory and Applications
Publisher: SciTePress
Pages: 549-556
Number of pages: 8
ISBN (Print): 9789897580048
DOIs
Publication status: Published - 2014
Event: 9th International Conference on Computer Vision Theory and Applications, VISAPP 2014 - Lisbon, Portugal
Duration: 5 Jan 2014 - 8 Jan 2014

Publication series

Name: VISAPP 2014 - Proceedings of the 9th International Conference on Computer Vision Theory and Applications
Volume: 2

Conference

Conference: 9th International Conference on Computer Vision Theory and Applications, VISAPP 2014
Country/Territory: Portugal
City: Lisbon
Period: 5/01/14 - 8/01/14
