TY - GEN
T1 - Depth-Based 3D Hand Pose Estimation
T2 - 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
AU - Yuan, Shanxin
AU - Garcia-Hernando, Guillermo
AU - Stenger, Bjorn
AU - Moon, Gyeongsik
AU - Chang, Ju Yong
AU - Lee, Kyoung Mu
AU - Molchanov, Pavlo
AU - Kautz, Jan
AU - Honari, Sina
AU - Ge, Liuhao
AU - Yuan, Junsong
AU - Chen, Xinghao
AU - Wang, Guijin
AU - Yang, Fan
AU - Akiyama, Kai
AU - Wu, Yang
AU - Wan, Qingfu
AU - Madadi, Meysam
AU - Escalera, Sergio
AU - Li, Shile
AU - Lee, Dongheui
AU - Oikonomidis, Iason
AU - Argyros, Antonis
AU - Kim, Tae-Kyun
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/14
Y1 - 2018/12/14
N2 - In this paper, we strive to answer two questions: What is the current state of 3D hand pose estimation from depth images? And, what are the next challenges that need to be tackled? Following the successful Hands In the Million Challenge (HIM2017), we investigate the top 10 state-of-the-art methods on three tasks: single frame 3D pose estimation, 3D hand tracking, and hand pose estimation during object interaction. We analyze the performance of different CNN structures with regard to hand shape, joint visibility, viewpoint and articulation distributions. Our findings include: (1) isolated 3D hand pose estimation achieves low mean errors (10 mm) in the viewpoint range of [70, 120] degrees, but it is far from being solved for extreme viewpoints; (2) 3D volumetric representations outperform 2D CNNs, better capturing the spatial structure of the depth data; (3) discriminative methods still generalize poorly to unseen hand shapes; (4) while joint occlusions pose a challenge for most methods, explicit modeling of structure constraints can significantly narrow the gap between errors on visible and occluded joints.
AB - In this paper, we strive to answer two questions: What is the current state of 3D hand pose estimation from depth images? And, what are the next challenges that need to be tackled? Following the successful Hands In the Million Challenge (HIM2017), we investigate the top 10 state-of-the-art methods on three tasks: single frame 3D pose estimation, 3D hand tracking, and hand pose estimation during object interaction. We analyze the performance of different CNN structures with regard to hand shape, joint visibility, viewpoint and articulation distributions. Our findings include: (1) isolated 3D hand pose estimation achieves low mean errors (10 mm) in the viewpoint range of [70, 120] degrees, but it is far from being solved for extreme viewpoints; (2) 3D volumetric representations outperform 2D CNNs, better capturing the spatial structure of the depth data; (3) discriminative methods still generalize poorly to unseen hand shapes; (4) while joint occlusions pose a challenge for most methods, explicit modeling of structure constraints can significantly narrow the gap between errors on visible and occluded joints.
UR - http://www.scopus.com/inward/record.url?scp=85061739188&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2018.00279
DO - 10.1109/CVPR.2018.00279
M3 - Conference contribution
AN - SCOPUS:85061739188
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 2636
EP - 2645
BT - Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
PB - IEEE Computer Society
Y2 - 18 June 2018 through 22 June 2018
ER -