AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object
Reconstruction
- URL: http://arxiv.org/abs/2207.12909v1
- Date: Tue, 26 Jul 2022 13:58:59 GMT
- Title: AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object
Reconstruction
- Authors: Zerui Chen, Yana Hasson, Cordelia Schmid, Ivan Laptev
- Abstract summary: We propose a joint learning framework that disentangles the pose and the shape.
We show that such aligned SDFs better focus on reconstructing shape details and improve reconstruction accuracy both for hands and objects.
- Score: 76.12874759788298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work achieved impressive progress towards joint reconstruction of
hands and manipulated objects from monocular color images. Existing methods
focus on two alternative representations in terms of either parametric meshes
or signed distance fields (SDFs). On the one hand, parametric models benefit
from prior knowledge, at the cost of limited shape deformation and mesh
resolution; such mesh models may therefore fail to precisely reconstruct
details such as the contact surfaces between hands and objects. On the other
hand, SDF-based methods can represent arbitrary details but lack explicit
priors. In this work
we aim to improve SDF models using priors provided by parametric
representations. In particular, we propose a joint learning framework that
disentangles the pose and the shape. We obtain hand and object poses from
parametric models and use them to align SDFs in 3D space. We show that such
aligned SDFs better focus on reconstructing shape details and improve
reconstruction accuracy both for hands and objects. We evaluate our method and
demonstrate significant improvements over the state of the art on the
challenging ObMan and DexYCB benchmarks.
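The core idea of the abstract, namely using poses predicted by parametric models to transform SDF query points into a canonical, pose-normalized frame before evaluating the SDF network, can be sketched as follows. This is a minimal illustration under simplifying assumptions: a single rigid transform per part (the paper's actual alignment and network details are not reproduced here), and all function names are hypothetical.

```python
import numpy as np

def align_query_points(points, rotation, translation):
    """Transform world-space query points into a pose-canonical frame.

    points:      (N, 3) array of 3D query locations
    rotation:    (3, 3) rotation matrix from the parametric pose branch
                 (e.g. a hand-root or object orientation); assumed rigid
    translation: (3,)   predicted root/center translation

    Returns (N, 3) points in the canonical frame, so the SDF network
    only has to model pose-normalized shape detail.
    """
    # x = R^T (p - t); row-vector form: (p - t) @ R
    return (points - translation) @ rotation

def query_aligned_sdf(sdf_net, points, rotation, translation):
    """Evaluate an SDF network (any callable) on pose-aligned coordinates."""
    canonical = align_query_points(points, rotation, translation)
    return sdf_net(canonical)

# Identity pose: alignment simply removes the translation.
pts = np.array([[1.0, 2.0, 3.0]])
out = align_query_points(pts, np.eye(3), np.array([1.0, 2.0, 3.0]))
# out is [[0., 0., 0.]]
```

Evaluating the SDF in this canonical frame is what lets the network focus its capacity on shape detail rather than on global pose variation.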
Related papers
- HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed
Distance Fields [96.04424738803667]
HOISDF is a guided hand-object pose estimation network.
It exploits hand and object SDFs to provide a global, implicit representation over the complete reconstruction volume.
We show that HOISDF achieves state-of-the-art results on hand-object pose estimation benchmarks.
arXiv Detail & Related papers (2024-02-26T22:48:37Z)
- DDF-HO: Hand-Held Object Reconstruction via Conditional Directed
Distance Field [82.81337273685176]
DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local to global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
arXiv Detail & Related papers (2023-08-16T09:06:32Z)
- gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object
Reconstruction [94.46581592405066]
We exploit the hand structure and use it as guidance for SDF-based shape reconstruction.
We predict kinematic chains of pose transformations and align SDFs with highly-articulated hand poses.
arXiv Detail & Related papers (2023-04-24T10:05:48Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped
Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- What's in your hands? 3D Reconstruction of Generic Objects in Hands [49.12461675219253]
Our work aims to reconstruct hand-held objects given a single RGB image.
In contrast to prior works that typically assume known 3D templates and reduce the problem to 3D pose estimation, our work reconstructs generic hand-held objects without knowing their 3D templates.
arXiv Detail & Related papers (2022-04-14T17:59:02Z)
- Disentangled Implicit Shape and Pose Learning for Scalable 6D Pose
Estimation [44.8872454995923]
We present a novel approach for scalable 6D pose estimation, by self-supervised learning on synthetic data of multiple objects using a single autoencoder.
We test our method on two multi-object benchmarks with real data, T-LESS and NOCS REAL275, and show it outperforms existing RGB-based methods in terms of pose estimation accuracy and generalization.
arXiv Detail & Related papers (2021-07-27T01:55:30Z)
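The "kinematic chains of pose transformations" mentioned in the gSDF entry above amount to standard forward kinematics: composing per-joint local transforms along the chain to obtain global joint frames, whose inverses map SDF query points into each joint's local frame. A minimal sketch, with hypothetical names and assuming joints are ordered so each parent precedes its children:

```python
import numpy as np

def compose_chain(local_transforms, parents):
    """Compose per-joint local 4x4 rigid transforms along a kinematic chain.

    local_transforms: list of (4, 4) transforms, one per joint, each
                      expressed relative to the joint's parent frame.
    parents:          parents[i] is the parent joint index (-1 for root);
                      assumed topologically ordered (parent index < child).

    Returns a list of (4, 4) global transforms. Inverting a joint's
    global transform maps world-space query points into that joint's
    local frame for SDF evaluation.
    """
    globals_ = [None] * len(local_transforms)
    for i, T in enumerate(local_transforms):
        p = parents[i]
        globals_[i] = T.copy() if p < 0 else globals_[p] @ T
    return globals_
```

Articulated alignment of this kind generalizes the single rigid transform used for objects: each hand part gets its own canonical frame along the chain.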
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.