DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features
Publication/Creation Date: June 21, 2016
Description
Abstract: We propose DeepHand to estimate the 3D pose of a hand using depth data from commercial 3D sensors. We discriminatively train convolutional neural networks to output a low-dimensional activation feature given a depth map. This activation feature vector is representative of the global or local joint angle parameters of a hand pose. We efficiently identify "spatial" nearest neighbors to the activation feature, from a database of features corresponding to synthetic depth maps, and store some "temporal" neighbors from previous frames. Our matrix completion algorithm uses these "spatio-temporal" activation features and the corresponding known pose parameter values to estimate the unknown pose parameters of the input feature vector. Our database of activation features provides large viewpoint coverage, and our hierarchical estimation of pose parameters is robust to occlusions. We show that our approach compares favorably to state-of-the-art methods while achieving real-time performance (≈ 32 FPS) on a standard computer.
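The abstract's core idea is to impute the unknown pose parameters of a query feature from the features and known poses of its spatio-temporal neighbors. A minimal sketch of that imputation step (not the authors' exact matrix completion algorithm): express the query activation feature as a linear combination of neighbor features, then apply the same weights to the neighbors' known pose parameters. All array shapes, the function name, and the least-squares solver are illustrative assumptions.

```python
import numpy as np

def estimate_pose(query_feature, neighbor_features, neighbor_poses):
    """Hedged sketch of pose imputation from spatio-temporal neighbors.

    query_feature:     (d,)   activation feature of the input depth map
    neighbor_features: (k, d) features of spatial + temporal neighbors
    neighbor_poses:    (k, p) known pose parameters of those neighbors
    Returns a (p,) estimate of the unknown pose parameters.
    """
    # Solve neighbor_features.T @ w ~= query_feature for the weights w.
    w, *_ = np.linalg.lstsq(neighbor_features.T, query_feature, rcond=None)
    # The same weights fill in the missing pose-parameter block.
    return neighbor_poses.T @ w

# Toy example: 5 neighbors, 8-D features, 3 pose parameters.
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))
poses = rng.standard_normal((5, 3))
weights = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
query = feats.T @ weights          # query lies in the span of its neighbors
print(estimate_pose(query, feats, poses))
```

When the query feature lies exactly in the span of its neighbors' features, the recovered weights reproduce the corresponding weighted combination of neighbor poses; in practice the least-squares fit gives a best-effort interpolation.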
Date archived: August 25, 2016
Last edited: July 5, 2021
How to cite this entry
Ayan Sinha, Chiho Choi. (June 21, 2016). "DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features". IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Fabric of Digital Life. https://fabricofdigitallife.com/Detail/objects/1758