Structure-aware 3D Hand Pose Regression from a Single Depth Image
Muhammad Jameel Nawaz Malik, Ahmed Elhayek, Didier Stricker
EuroVR (EuroVR-2018), October 22-23, London, United Kingdom
- Abstract:
- Hand pose tracking in 3D is an essential task for many virtual reality (VR) applications, such as games and manipulating virtual objects with bare hands. CNN-based learning methods achieve state-of-the-art accuracy by directly regressing the 3D pose from a single depth image. However, the 3D pose estimated by these methods is coarse and kinematically unstable because the sparse joint positions are learned independently. In this paper, we propose a novel structure-aware CNN-based algorithm which learns to automatically segment the hand from a raw depth image and to estimate the 3D hand pose jointly with new structural constraints. The constraints include finger lengths, distances of joints along the kinematic chain, and finger inter-distances. Learning these constraints helps to maintain a structural relation between the estimated joint keypoints. In addition, we convert the sparse representation of the hand skeleton into a dense one by performing n-point interpolation between pairs of parent and child joints. Through comprehensive evaluation, we show the effectiveness of our approach and demonstrate performance competitive with state-of-the-art methods on the public NYU hand pose dataset.
- Keywords:
- Hand pose, Depth image, Convolutional Neural Network (CNN)
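The sparse-to-dense skeleton conversion described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kinematic tree (`PARENTS`), the function name, and the choice of linear interpolation are assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical kinematic chain: PARENTS[i] is the parent joint of joint i,
# with -1 marking the root (wrist). This toy tree is illustrative only and
# does not reproduce the NYU hand skeleton used in the paper.
PARENTS = [-1, 0, 1, 2, 0, 4, 5]

def densify_skeleton(joints, n_points=3):
    """Convert a sparse (J, 3) joint array into a dense point set by
    inserting n_points evenly spaced interpolated points between every
    parent-child joint pair along the kinematic chain."""
    joints = np.asarray(joints, dtype=float)
    dense = [joints[i] for i in range(len(joints))]
    for child, parent in enumerate(PARENTS):
        if parent < 0:
            continue  # root joint has no parent bone to densify
        p, c = joints[parent], joints[child]
        for k in range(1, n_points + 1):
            t = k / (n_points + 1)
            dense.append((1 - t) * p + t * c)  # linear interpolation
    return np.stack(dense)
```

With the 7-joint toy tree above (6 parent-child bones) and `n_points=3`, the dense skeleton contains 7 + 6 × 3 = 25 points; the added in-between points give the regression target a denser structural signal than the sparse keypoints alone.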