Learning weighted joint-based features for action recognition using depth camera

Guang Chen, Daniel Clarke, Alois Knoll

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Human action recognition based on joints is a challenging task. The 3D positions of the tracked joints are very noisy when occlusions occur, which increases the intra-class variation in the actions. In this paper, we propose a novel approach to recognizing human actions with weighted joint-based features. Previous work has focused on hand-tuned joint-based features, which are difficult and time-consuming to extend to other modalities. In contrast, we compute the joint-based features using an unsupervised learning approach. To capture the intra-class variance, a multiple kernel learning approach is employed to learn the skeleton structure that combines these joint-based features. We test our algorithm on the action recognition task using the Microsoft Research Action3D (MSRAction3D) dataset. Experimental evaluation shows that the proposed approach outperforms state-of-the-art action recognition algorithms on depth videos.
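The abstract's core idea of combining per-joint features with learned weights via multiple kernel learning can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, RBF base kernel, and fixed example weights are our assumptions; in the paper the weights would be learned jointly with the classifier.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """RBF base kernel comparing sequences on one joint's feature vectors (rows of X)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def combine_kernels(kernels, betas):
    """Convex combination K = sum_j beta_j * K_j, one base kernel per joint.

    The simplex constraint (betas >= 0, sum to 1) is the standard MKL setup;
    the weights here are hypothetical, standing in for learned ones.
    """
    betas = np.asarray(betas, dtype=float)
    betas = betas / betas.sum()
    return sum(b * K for b, K in zip(betas, kernels))

# Toy data: 6 action sequences, 3 tracked joints, 4-dim feature per joint
# (dimensions chosen for illustration only).
rng = np.random.default_rng(0)
n_seq, n_joints, dim = 6, 3, 4
feats = [rng.normal(size=(n_seq, dim)) for _ in range(n_joints)]
kernels = [rbf_kernel(F) for F in feats]
K = combine_kernels(kernels, betas=[0.5, 0.3, 0.2])
# K is a symmetric PSD Gram matrix; it could be fed to a kernel SVM
# (e.g. an SVM with a precomputed kernel) for action classification.
```

The learned weights play the role of the "weighted" part of the weighted joint-based features: joints whose kernels discriminate actions well receive larger weights, down-weighting joints whose tracked 3D positions are noisy under occlusion.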

Original language: English
Title of host publication: VISAPP 2014 - Proceedings of the 9th International Conference on Computer Vision Theory and Applications
Publisher: SciTePress
Pages: 549-556
Number of pages: 8
ISBN (Print): 9789897580048
DOIs
State: Published - 2014
Event: 9th International Conference on Computer Vision Theory and Applications, VISAPP 2014 - Lisbon, Portugal
Duration: 5 Jan 2014 - 8 Jan 2014

Publication series

Name: VISAPP 2014 - Proceedings of the 9th International Conference on Computer Vision Theory and Applications
Volume: 2

Conference

Conference: 9th International Conference on Computer Vision Theory and Applications, VISAPP 2014
Country/Territory: Portugal
City: Lisbon
Period: 5/01/14 - 8/01/14

Keywords

  • Action Recognition
  • Depth Video Data
  • Unsupervised Learning
  • Weighted Joint-based Features

