gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object
Reconstruction
- URL: http://arxiv.org/abs/2304.11970v1
- Date: Mon, 24 Apr 2023 10:05:48 GMT
- Title: gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object
Reconstruction
- Authors: Zerui Chen, Shizhe Chen, Cordelia Schmid, Ivan Laptev
- Abstract summary: We exploit the hand structure and use it as guidance for SDF-based shape reconstruction.
We predict kinematic chains of pose transformations and align SDFs with highly-articulated hand poses.
- Score: 94.46581592405066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Signed distance functions (SDFs) are an attractive framework that
has recently shown promising results for 3D shape reconstruction from images. SDFs
seamlessly generalize to different shape resolutions and topologies but lack
explicit modelling of the underlying 3D geometry. In this work, we exploit the
hand structure and use it as guidance for SDF-based shape reconstruction. In
particular, we address reconstruction of hands and manipulated objects from
monocular RGB images. To this end, we estimate poses of hands and objects and
use them to guide 3D reconstruction. More specifically, we predict kinematic
chains of pose transformations and align SDFs with highly-articulated hand
poses. We improve the visual features of 3D points with geometry alignment and
further leverage temporal information to enhance the robustness to occlusion
and motion blurs. We conduct extensive experiments on the challenging ObMan and
DexYCB benchmarks and demonstrate significant improvements of the proposed
method over the state of the art.
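The core "geometry-driven" idea above, aligning SDF queries with estimated hand poses, can be illustrated with a toy sketch. This is not the paper's implementation; the rigid bone transform, the sphere SDF standing in for a learned SDF network, and all numeric values are illustrative assumptions. A world-space query point is mapped into a bone-local (canonical) frame before the distance is evaluated, so the SDF need not model articulation itself:

```python
import numpy as np

def sphere_sdf(p, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    # Stands in for a learned SDF network evaluated in canonical space.
    return np.linalg.norm(p - center) - radius

def align_to_bone(p, rotation, translation):
    # Express a world-space query point in a bone's local frame:
    # x_local = R^T (x - t), the pose-alignment step.
    return rotation.T @ (p - translation)

# Toy "bone" pose from a kinematic chain: 90-degree rotation about z,
# plus a translation (hypothetical values).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

query = np.array([1.0, 0.0, 0.5])        # world-space query point
local = align_to_bone(query, R, t)       # canonical (bone-local) coordinates
d = sphere_sdf(local, np.zeros(3), 0.5)  # distance evaluated in canonical space
print(d)
```

Here the query point lands exactly on the canonical sphere's surface, so the signed distance is zero; in the full method a network predicts these distances for hand and object surfaces jointly.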
Related papers
- EasyHOI: Unleashing the Power of Large Models for Reconstructing Hand-Object Interactions in the Wild [79.71523320368388]
Our work aims to reconstruct hand-object interactions from a single-view image.
We first design a novel pipeline to estimate the underlying hand pose and object shape.
With the initial reconstruction, we employ a prior-guided optimization scheme.
arXiv Detail & Related papers (2024-11-21T16:33:35Z)
- 3D Points Splatting for Real-Time Dynamic Hand Reconstruction [13.392046706568275]
3D Points Splatting Hand Reconstruction (3D-PSHR) is a real-time and photo-realistic hand reconstruction approach.
We propose a self-adaptive canonical points upsampling strategy to achieve high-resolution hand geometry representation.
To model texture, we disentangle the appearance color into the intrinsic albedo and pose-aware shading.
arXiv Detail & Related papers (2023-12-21T11:50:49Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [92.38975002642455]
We propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- UV-Based 3D Hand-Object Reconstruction with Grasp Optimization [23.06364591130636]
We propose a novel framework for 3D hand shape reconstruction and hand-object grasp optimization from a single RGB image.
Instead of approximating the contact regions with sparse points, we propose a dense representation in the form of a UV coordinate map.
Our pipeline increases hand shape reconstruction accuracy and produces a vibrant hand texture.
arXiv Detail & Related papers (2022-11-24T05:59:23Z)
- AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction [76.12874759788298]
We propose a joint learning framework that disentangles the pose and the shape.
We show that such aligned SDFs better focus on reconstructing shape details and improve reconstruction accuracy both for hands and objects.
arXiv Detail & Related papers (2022-07-26T13:58:59Z)
- What's in your hands? 3D Reconstruction of Generic Objects in Hands [49.12461675219253]
Our work aims to reconstruct hand-held objects given a single RGB image.
In contrast to prior works that typically assume known 3D templates and reduce the problem to 3D pose estimation, our work reconstructs generic hand-held objects without knowing their 3D templates.
arXiv Detail & Related papers (2022-04-14T17:59:02Z)
- Model-based 3D Hand Reconstruction via Self-Supervised Learning [72.0817813032385]
Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity.
We propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint.
For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations.
arXiv Detail & Related papers (2021-03-22T10:12:43Z)
- Reconstruct, Rasterize and Backprop: Dense shape and pose estimation from a single image [14.9851111159799]
This paper presents a new system to obtain dense object reconstructions along with 6-DoF poses from a single image.
We leverage recent advances in differentiable rendering (in particular, robotics) to close the loop with 3D reconstruction in camera frame.
arXiv Detail & Related papers (2020-04-25T20:53:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.