HandVoxNet++: 3D Hand Shape and Pose Estimation using Voxel-Based Neural Networks
Muhammad Jameel Nawaz Malik, Didier Stricker, Sk Aziz Ali, Vladislav Golyanik, Soshi Shimada, Ahmed Elhayek, Christian Theobalt
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), pages 1-13, IEEE, 11/2021.
- Abstract:
- 3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many applications. Existing methods addressing it directly regress hand meshes via 2D convolutional neural networks, which leads to artefacts due to perspective distortions in the images. To address the limitations of the existing methods, we develop HandVoxNet++, i.e., a voxel-based deep network with 3D and graph convolutions trained in a fully supervised manner. The input to our network is a 3D voxelized depth map based on the truncated signed distance function (TSDF). HandVoxNet++ relies on two hand shape representations. The first one is the 3D voxelized grid of hand shape, which does not preserve the mesh topology and which is the most accurate representation. The second representation is the hand surface, which preserves the mesh topology. We combine the advantages of both representations by aligning the hand surface to the voxelized hand shape either with a new neural Graph-Convolutions-based Mesh Registration (GCN-MeshReg) or a classical segment-wise Non-Rigid Gravitational Approach (NRGA++), which does not rely on training data. In extensive evaluations on three public benchmarks, i.e., SynHand5M, the depth-based HANDS19 challenge and HO-3D, the proposed HandVoxNet++ achieves state-of-the-art performance. In this journal extension of our previous approach presented at CVPR 2020, we gain 41.09% and 13.7% higher shape alignment accuracy on the SynHand5M and HANDS19 datasets, respectively. Our method is ranked first on the HANDS19 challenge dataset (Task 1: Depth-Based 3D Hand Pose Estimation) at the moment of the submission of our results to the portal in August 2020.
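To illustrate the kind of input the abstract describes, the sketch below converts a single depth map into a TSDF voxel grid. This is a minimal, hypothetical reconstruction of such preprocessing, not the paper's exact pipeline: the grid resolution, the 150 mm cube half-extent, the 50 mm truncation band, and the handling of missing depth as free space are all illustrative assumptions.

```python
import numpy as np

def depth_to_tsdf(depth, fx, fy, cx, cy, grid=32, half=150.0, trunc=50.0):
    """Sketch: voxelize a depth map (values in mm) into a TSDF grid.

    Each voxel center is projected into the image; its signed distance
    is the observed depth minus the voxel's depth along the viewing ray,
    truncated to [-trunc, trunc] and normalized to [-1, 1].
    All parameter choices here are illustrative assumptions.
    """
    # Back-project valid pixels to find the hand centroid (camera frame, mm).
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    center = np.array([x.mean(), y.mean(), z.mean()])

    # Voxel centers of a cube with side 2*half around the centroid.
    lin = np.linspace(-half, half, grid)
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([gx, gy, gz], axis=-1) + center  # (grid, grid, grid, 3)

    # Project every voxel center into the depth image.
    pu = np.round(pts[..., 0] * fx / pts[..., 2] + cx).astype(int)
    pv = np.round(pts[..., 1] * fy / pts[..., 2] + cy).astype(int)
    h, w = depth.shape
    inside = (pu >= 0) & (pu < w) & (pv >= 0) & (pv < h)

    d = np.zeros(pts.shape[:3], dtype=np.float64)
    d[inside] = depth[pv[inside], pu[inside]]

    # Signed distance along the ray; pixels without depth count as free space.
    sdf = np.where(d > 0, d - pts[..., 2], trunc)
    return (np.clip(sdf, -trunc, trunc) / trunc).astype(np.float32)
```

Voxels in front of the observed surface come out near +1, voxels behind it near -1, and voxels close to the surface near 0, which gives the 3D network a smooth occupancy-like signal instead of a sparse point cloud.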