UV-Based 3D Hand-Object Reconstruction with Grasp Optimization
- URL: http://arxiv.org/abs/2211.13429v1
- Date: Thu, 24 Nov 2022 05:59:23 GMT
- Title: UV-Based 3D Hand-Object Reconstruction with Grasp Optimization
- Authors: Ziwei Yu, Linlin Yang, You Xie, Ping Chen, Angela Yao
- Abstract summary: We propose a novel framework for 3D hand shape reconstruction and hand-object grasp optimization from a single RGB image.
Instead of approximating the contact regions with sparse points, we propose a dense representation in the form of a UV coordinate map.
Our pipeline increases hand shape reconstruction accuracy and produces a vibrant hand texture.
- Score: 23.06364591130636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel framework for 3D hand shape reconstruction and hand-object
grasp optimization from a single RGB image. The representation of hand-object
contact regions is critical for accurate reconstructions. Instead of
approximating the contact regions with sparse points, as in previous works, we
propose a dense representation in the form of a UV coordinate map. Furthermore,
we introduce inference-time optimization to fine-tune the grasp and improve
interactions between the hand and the object. Our pipeline increases hand shape
reconstruction accuracy and produces a vibrant hand texture. Experiments on
datasets such as HO-3D, FreiHAND, and DexYCB reveal that our proposed method
outperforms the state-of-the-art.
Related papers
- HandBooster: Boosting 3D Hand-Mesh Reconstruction by Conditional Synthesis and Sampling of Hand-Object Interactions [68.28684509445529]
We present HandBooster, a new approach to uplift the data diversity and boost the 3D hand-mesh reconstruction performance.
First, we construct versatile content-aware conditions to guide a diffusion model to produce realistic images with diverse hand appearances, poses, views, and backgrounds.
Then, we design a novel condition creator based on our similarity-aware distribution sampling strategies to deliberately find novel and realistic interaction poses that are distinctive from the training set.
arXiv Detail & Related papers (2024-03-27T13:56:08Z)
- HandNeRF: Learning to Reconstruct Hand-Object Interaction Scene from a Single RGB Image [41.580285338167315]
This paper presents a method to learn hand-object interaction prior for reconstructing a 3D hand-object scene from a single RGB image.
We use the hand shape to constrain the possible relative configuration of the hand and object geometry.
We show that HandNeRF is able to reconstruct hand-object scenes of novel grasp configurations more accurately than comparable methods.
arXiv Detail & Related papers (2023-09-14T17:42:08Z)
- HiFiHR: Enhancing 3D Hand Reconstruction from a Single Image via High-Fidelity Texture [40.012406098563204]
We present HiFiHR, a high-fidelity hand reconstruction approach that utilizes render-and-compare in the learning-based framework from a single image.
Experimental results on public benchmarks including FreiHAND and HO-3D demonstrate that our method outperforms the state-of-the-art hand reconstruction methods in texture reconstruction quality.
arXiv Detail & Related papers (2023-08-25T18:48:40Z)
- gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object Reconstruction [94.46581592405066]
We exploit the hand structure and use it as guidance for SDF-based shape reconstruction.
We predict kinematic chains of pose transformations and align SDFs with highly-articulated hand poses.
arXiv Detail & Related papers (2023-04-24T10:05:48Z)
- HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z)
- What's in your hands? 3D Reconstruction of Generic Objects in Hands [49.12461675219253]
Our work aims to reconstruct hand-held objects given a single RGB image.
In contrast to prior works that typically assume known 3D templates and reduce the problem to 3D pose estimation, our work reconstructs generic hand-held objects without knowing their 3D templates.
arXiv Detail & Related papers (2022-04-14T17:59:02Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
- SparseFusion: Dynamic Human Avatar Modeling from Sparse RGBD Images [49.52782544649703]
We propose a novel approach to reconstruct 3D human body shapes based on a sparse set of RGBD frames.
The main challenge is how to robustly fuse these sparse frames into a canonical 3D model.
Our framework is flexible, with potential applications going beyond shape reconstruction.
arXiv Detail & Related papers (2020-06-05T18:53:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.