DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features


Publication/Creation Date
June 21 2016
Creators/Contributors
Ayan Sinha (creator)
Chiho Choi (creator)
Karthik Ramani (contributor)
Purdue University (contributor)
C Design Lab (contributor)
Media Type
Corporate Video
Persuasive Intent
Academic
Discursive Type
Inventions
Description
Abstract: We propose DeepHand to estimate the 3D pose of a hand using depth data from commercial 3D sensors. We discriminatively train convolutional neural networks to output a low-dimensional activation feature given a depth map. This activation feature vector represents the global or local joint-angle parameters of a hand pose. We efficiently identify 'spatial' nearest neighbors to the activation feature in a database of features computed from synthetic depth maps, and store 'temporal' neighbors from previous frames. Our matrix completion algorithm uses these 'spatio-temporal' activation features, together with their known pose parameter values, to estimate the unknown pose parameters of the input feature vector. Our database of activation features provides large viewpoint coverage, and our hierarchical estimation of pose parameters is robust to occlusions. We show that our approach compares favorably to state-of-the-art methods while achieving real-time performance (≈ 32 FPS) on a standard computer.
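The pipeline the abstract describes (retrieve nearest-neighbor activation features, then impute the query's unknown pose via matrix completion) can be sketched conceptually as follows. This is not the authors' code: the database, dimensions, neighbor count, and the simple iterative SVD-based completion are all illustrative assumptions.

```python
# Conceptual sketch (not the DeepHand implementation): stack a query
# activation feature with its nearest neighbors' features and known
# poses, then fill in the query's missing pose entries with a simple
# low-rank (truncated-SVD) matrix completion. All sizes are made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: N synthetic depth maps -> (feature, pose) pairs.
N, F, P = 500, 32, 20                 # database size, feature dim, pose dim
db_feats = rng.standard_normal((N, F))
db_poses = db_feats @ rng.standard_normal((F, P))   # toy feature-pose relation

def estimate_pose(query_feat, k=10, rank=5, iters=50):
    """Nearest-neighbor retrieval followed by low-rank matrix completion."""
    # 'Spatial' nearest neighbors by Euclidean distance in feature space.
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nn = np.argsort(dists)[:k]

    # Matrix M: neighbor rows are fully known; the query row is known
    # only in its feature columns, with pose columns missing (masked).
    M = np.zeros((k + 1, F + P))
    M[:k, :F], M[:k, F:] = db_feats[nn], db_poses[nn]
    M[k, :F] = query_feat
    known = np.ones_like(M, dtype=bool)
    known[k, F:] = False              # unknown pose entries of the query

    # Iterative truncated-SVD imputation: project onto rank-r matrices,
    # then restore the known entries each round.
    X = M.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[known] = M[known]
    return X[k, F:]                   # imputed pose parameters

query = rng.standard_normal(F)
pose = estimate_pose(query)
print(pose.shape)                     # (20,)
```

Adding 'temporal' neighbors, as the abstract describes, would amount to appending rows from previous frames' activation features and estimated poses to the same matrix before completion.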
HCI Platform
Wearables
Relation to Body
On
Related Body Part
Wrist
Keywords
Research
Marketing Keywords
DeepHand

Date archived
August 25 2016
Last edited
July 5 2021
How to cite this entry
Ayan Sinha, Chiho Choi. (June 21 2016). "DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features". IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Fabric of Digital Life. https://fabricofdigitallife.com/Detail/objects/1758