Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes
- URL: http://arxiv.org/abs/2302.14348v3
- Date: Mon, 27 Mar 2023 17:08:27 GMT
- Title: Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes
- Authors: Jihyun Lee, Minhyuk Sung, Honggyu Choi, Tae-Kyun Kim
- Abstract summary: Implicit Two Hands (Im2Hands) is the first neural implicit representation of two interacting hands.
Im2Hands can produce fine-grained geometry of two hands with high hand-to-hand and hand-to-image coherency.
We experimentally demonstrate the effectiveness of Im2Hands on two-hand reconstruction in comparison to related methods.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present Implicit Two Hands (Im2Hands), the first neural implicit
representation of two interacting hands. Unlike existing methods on two-hand
reconstruction that rely on a parametric hand model and/or low-resolution
meshes, Im2Hands can produce fine-grained geometry of two hands with high
hand-to-hand and hand-to-image coherency. To handle the shape complexity and
interaction context between two hands, Im2Hands models the occupancy volume of
two hands - conditioned on an RGB image and coarse 3D keypoints - by two novel
attention-based modules responsible for (1) initial occupancy estimation and
(2) context-aware occupancy refinement, respectively. Im2Hands first learns
per-hand neural articulated occupancy in the canonical space designed for each
hand using query-image attention. It then refines the initial two-hand
occupancy in the posed space to enhance the coherency between the two hand
shapes using query-anchor attention. In addition, we introduce an optional
keypoint refinement module to enable robust two-hand shape estimation from
predicted hand keypoints in a single-image reconstruction scenario. We
experimentally demonstrate the effectiveness of Im2Hands on two-hand
reconstruction in comparison to related methods, where ours achieves
state-of-the-art results. Our code is publicly available at
https://github.com/jyunlee/Im2Hands.
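
For a concrete picture of the two-stage design, the sketch below outlines the data flow the abstract describes in PyTorch: a query-image attention stage that predicts initial per-hand occupancy in the canonical space, followed by a query-anchor attention stage that refines occupancy in the posed space using features from both hands. All class names, tensor shapes, and the residual logit-space update are illustrative assumptions for this sketch, not the authors' implementation; the official code is at https://github.com/jyunlee/Im2Hands.

```python
# Minimal PyTorch sketch of the two-stage occupancy pipeline described above.
# All names, dimensions, and the residual update are assumptions, not the
# authors' implementation (see https://github.com/jyunlee/Im2Hands).
import torch
import torch.nn as nn

class InitialOccupancyNet(nn.Module):
    """Stage 1: per-hand articulated occupancy in the hand's canonical space,
    conditioned on image features via query-image cross-attention."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.query_embed = nn.Linear(3, feat_dim)            # embed 3D query points
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.occ_head = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, 1))

    def forward(self, queries, img_tokens):
        # queries: (B, Q, 3) canonical-space points; img_tokens: (B, T, feat_dim)
        q = self.query_embed(queries)
        q, _ = self.attn(q, img_tokens, img_tokens)          # query-image attention
        return torch.sigmoid(self.occ_head(q)).squeeze(-1)   # (B, Q) initial occupancy

class OccupancyRefinementNet(nn.Module):
    """Stage 2: context-aware refinement in the posed space, conditioned on
    anchor features sampled from both hands via query-anchor cross-attention."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.query_embed = nn.Linear(4, feat_dim)            # posed point + initial occupancy
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.delta_head = nn.Linear(feat_dim, 1)

    def forward(self, points, init_occ, anchor_feats):
        # points: (B, Q, 3) posed-space points; init_occ: (B, Q);
        # anchor_feats: (B, A, feat_dim) features of anchors on both hands
        q = self.query_embed(torch.cat([points, init_occ.unsqueeze(-1)], dim=-1))
        q, _ = self.attn(q, anchor_feats, anchor_feats)      # query-anchor attention
        delta = self.delta_head(q).squeeze(-1)               # residual logit update
        return torch.sigmoid(torch.logit(init_occ, eps=1e-6) + delta)

# Usage with dummy tensors (the canonical/posed transform derived from the
# coarse 3D keypoints is omitted here for brevity):
stage1, stage2 = InitialOccupancyNet(), OccupancyRefinementNet()
pts = torch.rand(1, 1024, 3)                 # 3D query points
img_tokens = torch.rand(1, 196, 64)          # e.g. flattened image backbone features
anchors = torch.rand(1, 32, 64)              # per-hand anchor features
occ = stage2(pts, stage1(pts, img_tokens), anchors)   # (1, 1024) values in [0, 1]
```

The refinement is written as a residual update in logit space so that the second stage can only adjust, rather than overwrite, the first-stage prediction; this is a design choice of the sketch, not something the abstract specifies.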
Related papers
- OmniHands: Towards Robust 4D Hand Mesh Recovery via A Versatile Transformer (arXiv, 2024-05-30)
  We introduce OmniHands, a universal approach to recovering interactive hand meshes and their relative movement from monocular or multi-view inputs. We develop a universal architecture with novel tokenization and contextual feature fusion strategies. The efficacy of our approach is validated on several benchmark datasets.
- HandNeRF: Neural Radiance Fields for Animatable Interacting Hands (arXiv, 2023-03-24)
  We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands. We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
- ACR: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction (arXiv, 2023-03-10)
  We present ACR (Attention Collaboration-based Regressor), which makes the first attempt to reconstruct hands in arbitrary scenarios. We evaluate our method on various types of hand reconstruction datasets.
- Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image (arXiv, 2023-02-05)
  We propose a decoupled iterative refinement framework to achieve pixel-aligned hand reconstruction. Our method outperforms all existing two-hand reconstruction methods by a large margin on the InterHand2.6M dataset.
- 3D Interacting Hand Pose Estimation by Hand De-occlusion and Removal (arXiv, 2022-07-22)
  Estimating 3D interacting hand pose from a single RGB image is essential for understanding human actions. We propose to decompose the challenging interacting hand pose estimation task and estimate the pose of each hand separately. Experiments show that the proposed method significantly outperforms previous state-of-the-art interacting hand pose estimation approaches.
- Learning to Disambiguate Strongly Interacting Hands via Probabilistic Per-pixel Part Segmentation (arXiv, 2021-07-01)
  Self-similarity, and the resulting ambiguities in assigning pixel observations to the respective hands, is a major cause of the final 3D pose error. We propose DIGIT, a novel method for estimating the 3D poses of two interacting hands from a single monocular image. We experimentally show that the proposed approach achieves new state-of-the-art performance on the InterHand2.6M dataset.
- RGB2Hands: Real-Time Tracking of 3D Hand Interactions from Monocular RGB Video (arXiv, 2021-06-22)
  We present the first real-time method for motion capture of skeletal pose and 3D surface geometry of hands from a single RGB camera. To address the inherent depth ambiguities in RGB data, we propose a novel multi-task CNN. We experimentally verify the individual components of our RGB two-hand tracking and 3D reconstruction pipeline.
- Real-time Pose and Shape Reconstruction of Two Interacting Hands With a Single Depth Camera (arXiv, 2021-06-15)
  We present a novel method for real-time pose and shape reconstruction of two strongly interacting hands. Our approach combines several favorable properties; notably, it is marker-less. We show state-of-the-art results in scenes that exceed the complexity level demonstrated by previous work.
- Skeleton-aware multi-scale heatmap regression for 2D hand pose estimation (arXiv, 2021-05-23)
  We propose a new deep learning-based framework that consists of two main modules. The first module detects the hand skeleton and localizes the hand bounding box using a segmentation-based approach. The second module regresses the 2D joint locations through a multi-scale heatmap regression approach (a minimal decoding sketch follows this list).
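
As a reference for the heatmap-regression step mentioned in the last entry, here is a minimal, generic sketch of decoding 2D joint locations from per-joint heatmaps with a soft-argmax. It illustrates the general technique only; all names and shapes are assumptions, not that paper's implementation.

```python
# Generic sketch: decoding 2D joint locations from per-joint heatmaps via
# soft-argmax. Illustrative only; names and shapes are assumptions.
import torch

def soft_argmax_2d(heatmaps: torch.Tensor) -> torch.Tensor:
    """heatmaps: (B, J, H, W) raw scores, one map per joint.
    Returns (B, J, 2) expected (x, y) coordinates in pixel units."""
    b, j, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.reshape(b, j, -1), dim=-1).reshape(b, j, h, w)
    xs = torch.arange(w, dtype=probs.dtype)        # pixel coordinate grids
    ys = torch.arange(h, dtype=probs.dtype)
    x = (probs.sum(dim=2) * xs).sum(dim=-1)        # expectation over x
    y = (probs.sum(dim=3) * ys).sum(dim=-1)        # expectation over y
    return torch.stack([x, y], dim=-1)

# Example: 21 hand joints on a 64x64 heatmap grid.
joints = soft_argmax_2d(torch.randn(1, 21, 64, 64))   # -> (1, 21, 2)
```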