HandFoldingNet: A 3D Hand Pose Estimation Network Using
Multiscale-Feature Guided Folding of a 2D Hand Skeleton
- URL: http://arxiv.org/abs/2108.05545v1
- Date: Thu, 12 Aug 2021 05:52:44 GMT
- Title: HandFoldingNet: A 3D Hand Pose Estimation Network Using
Multiscale-Feature Guided Folding of a 2D Hand Skeleton
- Authors: Wencan Cheng, Jae Hyun Park and Jong Hwan Ko
- Abstract summary: This paper proposes HandFoldingNet, an accurate and efficient hand pose estimator.
The proposed model utilizes a folding-based decoder that folds a given 2D hand skeleton into the corresponding joint coordinates.
Experimental results show that the proposed model outperforms the existing methods on three hand pose benchmark datasets.
- Score: 4.1954750695245835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing use of 3D hand pose estimation in various
human-computer interaction applications, convolutional neural network
(CNN)-based estimation models have been actively explored. However, the
existing models require complex architectures or redundant computational
resources to achieve acceptable accuracy. To tackle this limitation, this paper
proposes HandFoldingNet, an accurate and efficient hand pose estimator that
regresses the hand joint locations from the normalized 3D hand point cloud
input. The proposed model utilizes a folding-based decoder that folds a given
2D hand skeleton into the corresponding joint coordinates. For higher
estimation accuracy, folding is guided by multi-scale features, which include
both global and joint-wise local features. Experimental results show that the
proposed model outperforms the existing methods on three hand pose benchmark
datasets with the lowest model parameter requirement. Code is available at
https://github.com/cwc1260/HandFold.
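To make the folding idea concrete, here is a minimal PyTorch sketch of a folding-style decoder guided by global and joint-wise local features. It is an illustration under assumed dimensions, not the authors' released implementation (see the GitHub repository above for that); all module names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Minimal sketch of a folding-based decoder: a fixed 2D hand-skeleton
    template is concatenated with multi-scale guidance features and folded
    into 3D joint coordinates by shared MLPs. All sizes are illustrative."""

    def __init__(self, n_joints=21, global_dim=512, local_dim=128):
        super().__init__()
        # Fixed 2D skeleton prior: one (x, y) template position per joint.
        self.register_buffer("template", torch.randn(n_joints, 2))
        in_dim = 2 + global_dim + local_dim
        # Two folding steps: the second refines the output of the first,
        # both conditioned on the same guidance features.
        self.fold1 = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                   nn.Linear(256, 3))
        self.fold2 = nn.Sequential(nn.Linear(in_dim + 3, 256), nn.ReLU(),
                                   nn.Linear(256, 3))

    def forward(self, global_feat, local_feat):
        # global_feat: (B, global_dim), pooled over the whole point cloud.
        # local_feat:  (B, n_joints, local_dim), joint-wise local features.
        B, J, _ = local_feat.shape
        tmpl = self.template.unsqueeze(0).expand(B, J, 2)
        glob = global_feat.unsqueeze(1).expand(B, J, global_feat.shape[-1])
        guide = torch.cat([tmpl, glob, local_feat], dim=-1)   # (B, J, in_dim)
        xyz = self.fold1(guide)                               # first fold -> 3D
        xyz = self.fold2(torch.cat([guide, xyz], dim=-1))     # refinement fold
        return xyz                                            # (B, J, 3)
```

A forward pass such as `FoldingDecoder()(torch.randn(2, 512), torch.randn(2, 21, 128))` returns a (2, 21, 3) tensor of joint coordinates.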
Related papers
- WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild [53.288327629960364]
We present a data-driven pipeline for efficient multi-hand reconstruction in the wild.
The proposed pipeline is composed of two components: a real-time fully convolutional hand localization network and a high-fidelity transformer-based 3D hand reconstruction model.
Our approach outperforms previous methods in both efficiency and accuracy on popular 2D and 3D benchmarks.
arXiv Detail & Related papers (2024-09-18T18:46:51Z)
- Two Hands Are Better Than One: Resolving Hand to Hand Intersections via Occupancy Networks [33.9893684177763]
Self-occlusion and finger articulation pose a significant challenge to hand pose estimation.
We exploit an occupancy network that represents the hand's volume as a continuous manifold.
We design an intersection loss function to minimize the likelihood of hand-to-point intersections.
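As a rough sketch of how such an occupancy-based intersection penalty could look (the function name and shapes below are hypothetical, not the paper's API):

```python
import torch

def intersection_loss(occupancy_fn, other_hand_points):
    """Hypothetical intersection penalty: query one hand's occupancy network
    at points sampled from the other hand; any point falling inside the
    volume (occupancy near 1) contributes to the loss.

    occupancy_fn: callable mapping (B, N, 3) points -> (B, N) values in [0, 1]
    other_hand_points: (B, N, 3) points sampled on/in the other hand
    """
    occ = occupancy_fn(other_hand_points)  # ~1 inside the hand volume
    return occ.mean()                      # mean occupancy = intersection degree
```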
arXiv Detail & Related papers (2024-04-08T11:32:26Z)
- HandDiff: 3D Hand Pose Estimation with Diffusion on Image-Point Cloud [60.47544798202017]
Hand pose estimation is a critical task in various human-computer interaction applications.
This paper proposes HandDiff, a diffusion-based hand pose estimation model that iteratively denoises an accurate hand pose conditioned on hand-shaped image-point clouds.
Experimental results demonstrate that the proposed HandDiff significantly outperforms the existing approaches on four challenging hand pose benchmark datasets.
arXiv Detail & Related papers (2024-04-04T02:15:16Z)
- HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z)
- End-to-end Weakly-supervised Single-stage Multiple 3D Hand Mesh Reconstruction from a Single RGB Image [9.238322841389994]
We propose a single-stage pipeline for multi-hand reconstruction.
Specifically, we design a multi-head auto-encoder structure, where each head network shares the same feature map and outputs the hand center, pose and texture.
Our method outperforms the state-of-the-art model-based methods in both weakly-supervised and fully-supervised manners.
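A minimal PyTorch sketch of such a shared-feature, multi-head design (layer sizes, head outputs, and the map-style predictions are assumptions for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class MultiHeadHandDecoder(nn.Module):
    """Sketch of a multi-head structure: one shared feature map feeds
    three heads that predict hand center, pose, and texture parameters."""

    def __init__(self, feat_ch=256, n_joints=21, tex_dim=10):
        super().__init__()
        def head(out_ch):
            return nn.Sequential(
                nn.Conv2d(feat_ch, 128, 3, padding=1), nn.ReLU(),
                nn.Conv2d(128, out_ch, 1))
        self.center_head = head(1)           # per-location hand-center heatmap
        self.pose_head = head(3 * n_joints)  # pose parameters per location
        self.texture_head = head(tex_dim)    # texture parameters per location

    def forward(self, feat):
        # feat: (B, feat_ch, H, W) shared feature map from the encoder.
        return (self.center_head(feat),
                self.pose_head(feat),
                self.texture_head(feat))
```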
arXiv Detail & Related papers (2022-04-18T03:57:14Z)
- A Skeleton-Driven Neural Occupancy Representation for Articulated Hands [49.956892429789775]
Hand ArticuLated Occupancy (HALO) is a novel representation of articulated hands that bridges the advantages of 3D keypoints and neural implicit surfaces.
We demonstrate the applicability of HALO to the task of conditional generation of hands that grasp 3D objects.
arXiv Detail & Related papers (2021-09-23T14:35:19Z)
- HandVoxNet++: 3D Hand Shape and Pose Estimation using Voxel-Based Neural Networks [71.09275975580009]
HandVoxNet++ is a voxel-based deep network with 3D and graph convolutions trained in a fully supervised manner.
HandVoxNet++ relies on two hand shape representations. The first is the 3D voxelized grid of the hand shape, which does not preserve the mesh topology; the second is the hand surface, which does.
We combine the advantages of both representations by aligning the hand surface to the voxelized hand shape, either with a new neural Graph-Convolutions-based Mesh Registration (GCN-MeshReg) or with a classical segment-wise Non-Rigid Gravitational Approach (NRGA++).
arXiv Detail & Related papers (2021-07-02T17:59:54Z)
- A hybrid classification-regression approach for 3D hand pose estimation using graph convolutional networks [1.0152838128195467]
We propose a two-stage GCN-based framework that learns per-pose relationship constraints.
The first stage quantizes the 2D/3D space to classify the joints into 2D/3D blocks based on their locality.
The second stage uses a GCN-based module with an adaptive nearest-neighbor algorithm to determine joint relationships.
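The quantization step of the first stage can be sketched as follows (a toy illustration with assumed bounds and bin counts, not the paper's code):

```python
import torch

def coords_to_blocks(joints, lo=-1.0, hi=1.0, bins=16):
    """Quantize continuous joint coordinates into per-axis block indices,
    the classification targets of the first stage.

    joints: (J, 3) joint coordinates inside the cube [lo, hi]^3.
    Returns (J, 3) integer block indices in [0, bins).
    """
    normed = (joints - lo) / (hi - lo)               # map into [0, 1]
    return (normed * bins).long().clamp(0, bins - 1)

def blocks_to_coarse(idx, lo=-1.0, hi=1.0, bins=16):
    """Map block indices back to block-center coordinates: the coarse
    estimate that the second, GCN-based regression stage then refines."""
    return lo + (idx.float() + 0.5) / bins * (hi - lo)
```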
arXiv Detail & Related papers (2021-05-23T10:09:10Z)
- MM-Hand: 3D-Aware Multi-Modal Guided Hand Generative Network for 3D Hand Pose Synthesis [81.40640219844197]
Estimating the 3D hand pose from a monocular RGB image is important but challenging.
One solution is to train on large-scale RGB hand images with accurate 3D hand keypoint annotations.
We have developed a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images.
arXiv Detail & Related papers (2020-10-02T18:27:34Z)
- Two-hand Global 3D Pose Estimation Using Monocular RGB [0.0]
We tackle the challenging task of estimating global 3D joint locations for both hands via only monocular RGB input images.
We propose a novel multi-stage convolutional neural network based pipeline that accurately segments and locates the hands.
We present the first work that achieves accurate global 3D hand tracking on both hands using RGB-only inputs.
arXiv Detail & Related papers (2020-06-01T23:53:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.