Learning 3D Joint Constraints from Vision-based Motion Capture Datasets


Pramod Murthy, Hammad T. Butt, Sandesh Hiremath, Alireza Khoshhal, Didier Stricker

Published as an Express Paper in IPSJ Transactions on Computer Vision and Applications (Springer Open Journal); presented at the IAPR Conference on Machine Vision Applications (MVA 2019), May 27-31, Tokyo, Japan [Oral].

Abstract:
Realistic estimation and synthesis of articulated human motion must satisfy anatomical constraints on joint angles. We take a data-driven approach to learning human joint limits from 3D motion capture datasets. Joint constraints are represented with a new formulation (s1, s2, τ) based on the swing-twist representation in exponential-map form. This parameterization is applied to the Human3.6M dataset to create a lookup map for each joint. These maps enable us to generate 'synthetic' datasets covering the entire rotation space of a given joint. A set of neural network discriminators is then trained on the synthetic datasets to classify joint rotations as valid or invalid. The discriminators achieve accuracies of 94.4%-99.4% across the different joints. We validate the precision-accuracy trade-off of the discriminators and qualitatively evaluate the classified poses with an interactive tool. The learned discriminators can be used as 'priors' for human pose estimation and motion synthesis.

Paper DOI: https://ipsjcva.springeropen.com/articles/10.1186/s41074-019-0057-z
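As an illustrative sketch (not the paper's own code), the swing-twist parameterization described above can be computed from a unit quaternion: the rotation is split into a twist about a fixed joint axis and a swing perpendicular to it, with the swing written as a 2D exponential-map vector (s1, s2) and the twist as an angle τ. The function names and the choice of the z-axis as the twist axis are assumptions made here for illustration.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def swing_twist_params(q, twist_axis=np.array([0.0, 0.0, 1.0])):
    """Split a unit quaternion q into (s1, s2, tau).

    tau is the twist angle about `twist_axis`; (s1, s2) is the swing,
    expressed as an exponential-map (axis * angle) vector lying in the
    plane perpendicular to the twist axis (assumed here to be z).
    """
    q = q / np.linalg.norm(q)
    # Twist: project the quaternion's vector part onto the twist axis.
    proj = np.dot(q[1:], twist_axis) * twist_axis
    twist = np.array([q[0], *proj])
    norm = np.linalg.norm(twist)
    # Degenerate case: 180-degree swing, twist undefined -> identity.
    twist = twist / norm if norm > 1e-12 else np.array([1.0, 0.0, 0.0, 0.0])
    # q = swing * twist  =>  swing = q * conj(twist).
    swing = quat_mul(q, twist * np.array([1.0, -1.0, -1.0, -1.0]))
    tau = 2.0 * np.arctan2(np.dot(twist[1:], twist_axis), twist[0])
    # Swing as exponential map: unit axis (orthogonal to z) times angle.
    sin_half = np.linalg.norm(swing[1:])
    if sin_half < 1e-12:
        return 0.0, 0.0, tau
    angle = 2.0 * np.arctan2(sin_half, swing[0])
    s = swing[1:] / sin_half * angle
    return s[0], s[1], tau  # s[2] is ~0 for a z twist axis

# Example: a 90-degree rotation about x is pure swing, no twist about z.
q = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])
print(swing_twist_params(q))  # ~ (1.5708, 0.0, 0.0)
```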
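The per-joint discriminators could follow a similarly minimal pattern. The abstract does not specify the network architecture, so the layer widths, activation, and binary cross-entropy objective below are assumptions for a small PyTorch sketch, not the paper's actual setup.

```python
import torch
import torch.nn as nn

class JointLimitDiscriminator(nn.Module):
    """Per-joint binary classifier over (s1, s2, tau) samples:
    a positive logit marks an anatomically valid rotation."""
    def __init__(self, hidden=64):  # hidden width is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 3) of (s1, s2, tau)
        return self.net(x)

# Training skeleton on a synthetic (s1, s2, tau) -> {valid, invalid} set.
model = JointLimitDiscriminator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
x = torch.randn(256, 3)                    # placeholder samples
y = torch.randint(0, 2, (256, 1)).float()  # placeholder labels
loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```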