Silhouette-Net: 3D Hand Pose Estimation from Silhouettes
- URL: http://arxiv.org/abs/1912.12436v1
- Date: Sat, 28 Dec 2019 10:29:42 GMT
- Title: Silhouette-Net: 3D Hand Pose Estimation from Silhouettes
- Authors: Kuo-Wei Lee, Shih-Hung Liu, Hwann-Tzong Chen, Koichi Ito
- Abstract summary: Existing approaches mainly consider different input modalities and settings, such as monocular RGB, multi-view RGB, depth, or point cloud.
We present a new architecture that automatically learns guidance from implicit depth perception and resolves the ambiguity of hand pose through end-to-end training.
- Score: 16.266199156878056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D hand pose estimation has received a lot of attention for its wide range of
applications and has made great progress owing to the development of deep
learning. Existing approaches mainly consider different input modalities and
settings, such as monocular RGB, multi-view RGB, depth, or point cloud, to
provide sufficient cues for resolving variations caused by self-occlusion and
viewpoint change. In contrast, this work aims to address the less-explored idea
of using minimal information to estimate 3D hand poses. We present a new
architecture that automatically learns guidance from implicit depth
perception and solves the ambiguity of hand pose through end-to-end training.
The experimental results show that 3D hand poses can be accurately estimated
from hand silhouettes alone, without using depth maps. Extensive evaluations
on the 2017 Hands In the Million Challenge (HIM2017) benchmark dataset further
demonstrate that our method achieves comparable or even better performance
than recent depth-based approaches and sets the state of the art for
estimating 3D hand poses from silhouettes.
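Benchmark comparisons of the kind described above are commonly reported as the mean per-joint 3D error. A minimal sketch of that metric (the function name and 21-joint, millimeter-unit convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mean_per_joint_error(pred, gt):
    """Mean Euclidean distance (e.g. in mm) between predicted and
    ground-truth 3D joints, averaged over all joints and frames.

    pred, gt: arrays of shape (num_frames, num_joints, 3).
    """
    assert pred.shape == gt.shape and pred.shape[-1] == 3
    # Per-joint Euclidean distances, then the average over everything.
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: one frame, 21 hand joints, every prediction off by 3 mm in x.
gt = np.zeros((1, 21, 3))
pred = gt.copy()
pred[..., 0] += 3.0
print(mean_per_joint_error(pred, gt))  # → 3.0
```

A lower value means more accurate poses; "comparable or better than depth-based approaches" refers to this kind of aggregate error.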
Related papers
- SHARP: Segmentation of Hands and Arms by Range using Pseudo-Depth for Enhanced Egocentric 3D Hand Pose Estimation and Action Recognition [5.359837526794863]
Hand pose represents key information for action recognition in the egocentric perspective.
We propose to improve egocentric 3D hand pose estimation based on RGB frames only by using pseudo-depth images.
arXiv Detail & Related papers (2024-08-19T14:30:29Z)
- 3D Interacting Hand Pose Estimation by Hand De-occlusion and Removal [85.30756038989057]
Estimating 3D interacting hand pose from a single RGB image is essential for understanding human actions.
We propose to decompose the challenging interacting hand pose estimation task and estimate the pose of each hand separately.
Experiments show that the proposed method significantly outperforms previous state-of-the-art interacting hand pose estimation approaches.
arXiv Detail & Related papers (2022-07-22T13:04:06Z)
- TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation [8.946655323517092]
TriHorn-Net is a novel model that introduces two innovations to improve hand pose estimation accuracy on depth images.
The first innovation is the decomposition of 3D hand pose estimation into the estimation of 2D joint locations in the depth-image space.
The second innovation is PixDropout, which is, to the best of our knowledge, the first appearance-based data augmentation method for hand depth images.
arXiv Detail & Related papers (2022-06-14T19:08:42Z)
- Efficient Annotation and Learning for 3D Hand Pose Estimation: A Survey [23.113633046349314]
3D hand pose estimation has potential to enable various applications, such as video understanding, AR/VR, and robotics.
However, the performance of models is tied to the quality and quantity of annotated 3D hand poses.
We examine methods for learning 3D hand poses when annotated data are scarce, including self-supervised pretraining, semi-supervised learning, and domain adaptation.
arXiv Detail & Related papers (2022-06-05T20:18:52Z)
- Efficient Virtual View Selection for 3D Hand Pose Estimation [50.93751374572656]
We propose a new virtual view selection and fusion module for 3D hand pose estimation from single depth.
Our proposed virtual view selection and fusion module is effective for 3D hand pose estimation.
arXiv Detail & Related papers (2022-03-29T11:57:53Z)
- Model-based 3D Hand Reconstruction via Self-Supervised Learning [72.0817813032385]
Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity.
We propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint.
For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations.
arXiv Detail & Related papers (2021-03-22T10:12:43Z) - MM-Hand: 3D-Aware Multi-Modal Guided Hand Generative Network for 3D Hand
Pose Synthesis [81.40640219844197]
Estimating the 3D hand pose from a monocular RGB image is important but challenging.
A solution is training on large-scale RGB hand images with accurate 3D hand keypoint annotations.
We have developed a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images.
arXiv Detail & Related papers (2020-10-02T18:27:34Z) - HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose
Estimation from a Single Depth Map [72.93634777578336]
We propose a novel architecture with 3D convolutions trained in a weakly-supervised manner.
The proposed approach improves over the state of the art by 47.8% on the SynHand5M dataset.
Our method produces visually more reasonable and realistic hand shapes on NYU and BigHand2.2M datasets.
arXiv Detail & Related papers (2020-04-03T14:27:16Z) - Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and
Objects for 3D Hand Pose Estimation under Hand-Object Interaction [137.28465645405655]
HANDS'19 is a challenge to evaluate the abilities of current 3D hand pose estimators (HPEs) to interpolate and extrapolate the poses of a training set.
We show that the accuracy of state-of-the-art methods can drop, and that they fail mostly on poses absent from the training set.
arXiv Detail & Related papers (2020-03-30T19:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.