Simultaneous Hand Pose and Skeleton Bone-Lengths Estimation from a Single Depth Image
Muhammad Jameel Nawaz Malik, Ahmed Elhayek, Didier Stricker
5th International Conference on 3D Vision (3DV 2017), October 10-12, 2017, Qingdao, China

Abstract:
Articulated hand pose estimation is a challenging task in human-computer interaction. State-of-the-art hand pose estimation algorithms work only for the one or few subjects on which they have been calibrated or trained. In particular, hybrid methods, based on learning followed by model fitting or on model-based deep learning, do not explicitly account for varying hand shapes and sizes. In this work, we introduce a novel hybrid algorithm that estimates the 3D hand pose and the bone lengths of the hand skeleton simultaneously, from a single depth image. The proposed CNN architecture jointly learns hand pose parameters and scale parameters associated with the bone lengths. A new hybrid forward kinematics layer then combines both sets of parameters to estimate the 3D joint positions of the hand. For end-to-end training, we merge three public datasets (NYU, ICVL, and MSRA-2015) into one unified format, obtaining large variation in hand shapes and sizes. Among hybrid methods, ours improves on the state-of-the-art accuracy on the combined dataset and on the ICVL dataset, both of which contain multiple subjects. Our algorithm is also demonstrated to generalize well to unseen images.
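To make the role of the forward kinematics layer concrete, the following NumPy sketch shows how per-joint pose parameters (rotations) and per-bone scale parameters can jointly determine 3D joint positions along a kinematic chain. This is a minimal illustration under assumed conventions; the function name, parameterization, and chain layout are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def forward_kinematics(rotations, bone_scales, rest_offsets, parents):
    """Sketch of a forward-kinematics layer (illustrative, not the paper's).

    rotations:    (J, 3, 3) rotation matrix per joint (pose parameters)
    bone_scales:  (J,)      multiplicative scale per bone (bone-length parameters)
    rest_offsets: (J, 3)    bone vector from the parent joint in the rest pose
    parents:      (J,)      index of each joint's parent (-1 for the root)
    Returns (J, 3) global 3D joint positions.
    """
    J = len(parents)
    positions = np.zeros((J, 3))
    global_rot = np.zeros((J, 3, 3))
    for j in range(J):
        if parents[j] < 0:
            # root joint: global rotation is its own; position is the scaled rest offset
            global_rot[j] = rotations[j]
            positions[j] = bone_scales[j] * rest_offsets[j]
        else:
            p = parents[j]
            # accumulate rotation down the chain, then translate by the
            # scaled bone vector expressed in the parent's frame
            global_rot[j] = global_rot[p] @ rotations[j]
            positions[j] = positions[p] + global_rot[p] @ (bone_scales[j] * rest_offsets[j])
    return positions

# Toy 3-joint chain (root -> middle -> tip) with unit bones along x:
parents = np.array([-1, 0, 1])
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
rots = np.stack([np.eye(3)] * 3)
scales = np.array([1.0, 1.2, 0.8])  # longer proximal bone, shorter distal bone
print(forward_kinematics(rots, scales, rest, parents))
```

Because every operation is differentiable in the pose and scale parameters, a layer of this form can sit on top of a CNN and be trained end-to-end against 3D joint-position supervision.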