Hand Pose Estimation for Hand-Object Interaction Cases using Augmented Autoencoder

Shile Li, Haojie Wang, Dongheui Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Hand pose estimation with objects is challenging due to object occlusion and the lack of large annotated datasets. To tackle these issues, we propose an Augmented Autoencoder-based deep learning method that uses augmented clean hand data. Our method takes the 3D point cloud of a hand with an augmented object as input and encodes it into a latent representation of the hand. From this latent representation, our method decodes the 3D hand pose, and we propose an auxiliary point cloud decoder to assist the formation of the latent space. Through quantitative and qualitative evaluation on both a synthetic dataset and real captured data containing objects, we demonstrate state-of-the-art performance for hand pose estimation with objects, even when using only a small number of annotated hand-object samples.
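
The abstract describes three components: an encoder that maps the occluded hand-object point cloud to a latent hand code, a decoder that regresses the 3D hand pose from that code, and an auxiliary point cloud decoder that shapes the latent space. Below is a minimal PyTorch sketch of that layout; the PointNet-style encoder, layer sizes, 21-joint output, and all class names are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the augmented-autoencoder layout described in the abstract.
# All module names, layer sizes, and the PointNet-style encoder are assumptions.
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """Encode an (occluded) hand+object point cloud into a latent hand code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        # Shared per-point MLP followed by a symmetric max-pool (PointNet-style assumption).
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, points):            # points: (B, N, 3)
        feats = self.mlp(points)          # (B, N, latent_dim)
        return feats.max(dim=1).values    # (B, latent_dim)


class PoseDecoder(nn.Module):
    """Decode the latent code into 3D joint positions (21 joints assumed)."""
    def __init__(self, latent_dim=128, num_joints=21):
        super().__init__()
        self.num_joints = num_joints
        self.fc = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_joints * 3),
        )

    def forward(self, z):
        return self.fc(z).view(-1, self.num_joints, 3)


class AuxPointDecoder(nn.Module):
    """Auxiliary decoder reconstructing a clean hand point cloud from the latent code."""
    def __init__(self, latent_dim=128, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.fc = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, z):
        return self.fc(z).view(-1, self.num_points, 3)


if __name__ == "__main__":
    # Toy forward pass: a batch of 2 hand+object clouds with 1024 points each.
    enc, pose_dec, pc_dec = PointCloudEncoder(), PoseDecoder(), AuxPointDecoder()
    cloud = torch.randn(2, 1024, 3)
    z = enc(cloud)
    joints = pose_dec(z)   # (2, 21, 3) predicted hand pose
    recon = pc_dec(z)      # (2, 1024, 3) reconstructed clean hand cloud
    print(joints.shape, recon.shape)

In this reading, the pose decoder provides the supervised pose loss while the auxiliary reconstruction loss encourages the latent code to capture the clean hand geometry despite the augmented object; the exact losses and training procedure are given in the paper itself.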

Original language: English
Title of host publication: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 993-999
Number of pages: 7
ISBN (Electronic): 9781728173955
DOIs
State: Published - May 2020
Externally published: Yes
Event: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020 - Paris, France
Duration: 31 May 2020 - 31 Aug 2020

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

Conference

Conference: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Country/Territory: France
City: Paris
Period: 31/05/20 - 31/08/20
