DeepHPS: End-to-end Estimation of 3D Hand Pose and Shape by Learning from Synthetic Depth
International Conference on 3D Vision (3DVision-2018), September 5-8, Verona, Italy
- Abstract:
- Articulated hand pose and shape estimation is an important problem for vision-based applications such as augmented reality and animation. In contrast to existing methods, which optimize only for joint positions, we propose a fully supervised deep network that learns to jointly estimate a full 3D hand mesh representation and pose from a single depth image. To this end, a CNN architecture is employed to estimate the parametric representations, i.e., hand pose, bone scales and complex shape parameters. Then, a novel hand pose and shape layer, embedded inside our deep framework, produces the 3D joint positions and the hand mesh (a simplified sketch of this pipeline is given after the keywords). The lack of sufficient training data with varying hand shapes limits the generalization performance of learning-based methods, and manually annotating real data is suboptimal. Therefore, we present SynHand5M: a million-scale synthetic dataset of depth maps with accurate joint annotations, segmentation masks and mesh files. Among model-based learning (hybrid) methods, we show improved results on our dataset and on two public benchmarks, i.e., NYU and ICVL. Moreover, by employing a joint training strategy with real and synthetic data, we recover the 3D hand mesh and pose from real images in 3.7 ms. https://cloud.dfki.de/owncloud/index.php/s/iCMRF7a5FkXrdpn
- Keywords:
- 3D hand pose and shape, convolutional neural networks, depth image
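The following is a minimal, hypothetical sketch of the two-stage design described in the abstract: a CNN regresses the parametric representation (hand pose, bone scales, shape parameters) from a depth image, and a hand pose and shape layer maps those parameters to 3D joints and a mesh. All dimensions, module names and the linear stand-in for the hand layer are illustrative assumptions, not the authors' implementation; the actual layer applies forward kinematics and mesh deformation analytically.

```python
# Illustrative sketch only (PyTorch); not the authors' code.
import torch
import torch.nn as nn

NUM_JOINT_ANGLES = 26   # assumed DoF of the kinematic hand model
NUM_BONES = 20          # assumed number of scalable bones
NUM_SHAPE_PARAMS = 10   # assumed low-dimensional shape space
NUM_JOINTS = 21         # assumed number of output joints
NUM_VERTICES = 1000     # placeholder mesh resolution


class ParamRegressor(nn.Module):
    """CNN that maps a single depth image to pose, bone-scale and shape parameters."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, NUM_JOINT_ANGLES + NUM_BONES + NUM_SHAPE_PARAMS)

    def forward(self, depth):
        p = self.head(self.features(depth))
        pose = p[:, :NUM_JOINT_ANGLES]
        scales = p[:, NUM_JOINT_ANGLES:NUM_JOINT_ANGLES + NUM_BONES]
        shape = p[:, NUM_JOINT_ANGLES + NUM_BONES:]
        return pose, scales, shape


class HandPoseShapeLayer(nn.Module):
    """Stand-in for the hand pose and shape layer: a learned linear mapping from
    parameters to joints and mesh vertices (the real layer is an analytic,
    differentiable hand model)."""
    def __init__(self):
        super().__init__()
        in_dim = NUM_JOINT_ANGLES + NUM_BONES + NUM_SHAPE_PARAMS
        self.to_joints = nn.Linear(in_dim, NUM_JOINTS * 3)
        self.to_mesh = nn.Linear(in_dim, NUM_VERTICES * 3)

    def forward(self, pose, scales, shape):
        params = torch.cat([pose, scales, shape], dim=1)
        joints = self.to_joints(params).view(-1, NUM_JOINTS, 3)
        mesh = self.to_mesh(params).view(-1, NUM_VERTICES, 3)
        return joints, mesh


if __name__ == "__main__":
    depth = torch.randn(2, 1, 128, 128)            # batch of cropped depth maps
    regressor, hand_layer = ParamRegressor(), HandPoseShapeLayer()
    joints, mesh = hand_layer(*regressor(depth))
    print(joints.shape, mesh.shape)                # (2, 21, 3), (2, 1000, 3)
```

Because both stages are differentiable, joint- and mesh-level supervision can be back-propagated end-to-end through the parameter regressor, which is the property the abstract's fully supervised training relies on.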