DDF-HO: Hand-Held Object Reconstruction via Conditional Directed
Distance Field
- URL: http://arxiv.org/abs/2308.08231v3
- Date: Thu, 26 Oct 2023 07:41:12 GMT
- Title: DDF-HO: Hand-Held Object Reconstruction via Conditional Directed
Distance Field
- Authors: Chenyangguang Zhang, Yan Di, Ruida Zhang, Guangyao Zhai, Fabian
Manhardt, Federico Tombari and Xiangyang Ji
- Abstract summary: DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local-to-global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
- Score: 82.81337273685176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing hand-held objects from a single RGB image is an important and
challenging problem. Existing works utilizing Signed Distance Fields (SDF)
reveal limitations in comprehensively capturing the complex hand-object
interactions, since an SDF is only reliable within the proximity of the target
and hence cannot simultaneously encode local hand and object cues. To
address this issue, we propose DDF-HO, a novel approach leveraging Directed
Distance Field (DDF) as the shape representation. Unlike SDF, DDF maps a ray in
3D space, consisting of an origin and a direction, to corresponding DDF values,
including a binary visibility signal determining whether the ray intersects the
object, and a distance value measuring the distance from the origin to the
target in the given direction. We randomly sample multiple rays and collect
local-to-global geometric features for them by introducing a novel 2D ray-based feature
aggregation scheme and a 3D intersection-aware hand pose embedding, combining
2D-3D features to model hand-object interactions. Extensive experiments on
synthetic and real-world datasets demonstrate that DDF-HO consistently
outperforms all baseline methods by a large margin, especially under Chamfer
Distance, with roughly an 80% improvement. Code is available at
https://github.com/ZhangCYG/DDFHO.
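To make the ray-to-value mapping concrete, here is a minimal sketch of a DDF for an analytic sphere; the class and its interface are illustrative assumptions, not the paper's learned model or its released code.

```python
import numpy as np

class SphereDDF:
    """Illustrative Directed Distance Field for a sphere centered at the
    origin (a toy stand-in for the learned DDF, not the paper's code)."""

    def __init__(self, radius: float = 1.0):
        self.radius = radius

    def query(self, origin: np.ndarray, direction: np.ndarray):
        """Map a ray (origin, direction) to (visibility, distance).

        visibility: 1.0 if the ray intersects the sphere, else 0.0.
        distance:   distance from the origin to the first intersection
                    along the (normalized) direction; inf on a miss.
        """
        d = direction / np.linalg.norm(direction)
        # Solve |origin + t*d|^2 = r^2, i.e. t^2 + 2*b*t + c = 0.
        b = np.dot(origin, d)
        c = np.dot(origin, origin) - self.radius ** 2
        disc = b * b - c
        if disc < 0:
            return 0.0, np.inf       # the ray's line misses the sphere
        t = -b - np.sqrt(disc)       # nearer intersection
        if t < 0:
            t = -b + np.sqrt(disc)   # origin inside the sphere
        if t < 0:
            return 0.0, np.inf       # sphere lies entirely behind the origin
        return 1.0, float(t)

# A ray from (0, 0, -2) toward the unit sphere hits it at distance 1.
vis, dist = SphereDDF(1.0).query(np.array([0.0, 0.0, -2.0]),
                                 np.array([0.0, 0.0, 1.0]))
assert vis == 1.0 and abs(dist - 1.0) < 1e-9
```

In DDF-HO this mapping is produced by a network conditioned on the aggregated 2D image and 3D hand-pose features rather than computed analytically; the sphere merely pins down the input/output convention.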
Related papers
- Probabilistic Directed Distance Fields for Ray-Based Shape Representations [8.134429779950658]
Directed Distance Fields (DDFs) are a novel neural shape representation that builds upon classical distance fields.
We show how to model inherent discontinuities in the underlying field.
We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction.
arXiv Detail & Related papers (2024-04-13T21:02:49Z)
- Unsigned Orthogonal Distance Fields: An Accurate Neural Implicit Representation for Diverse 3D Shapes [29.65562721329593]
In this paper, we introduce a novel neural implicit representation based on unsigned distance fields (UDFs).
In UODFs, the minimal unsigned distance from any spatial point to the shape surface is defined solely in one direction, contrasting with the multi-directional determination made by SDF and UDF (see the sketch after this entry).
We verify the effectiveness of UODFs through a range of reconstruction examples, from watertight and non-watertight shapes to complex shapes.
arXiv Detail & Related papers (2024-03-03T06:58:35Z)
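As a toy illustration of the one-directional distance above, the sketch below contrasts a classic UDF (nearest surface in any direction) with a distance measured only along a fixed axis, for an analytic sphere; the function names and the exact one-directional convention are assumptions, not the paper's code.

```python
import numpy as np

RADIUS = 1.0  # unit sphere, purely illustrative

def udf(p):
    """Classic unsigned distance field: nearest surface in ANY direction."""
    return abs(np.linalg.norm(p) - RADIUS)

def uodf_along(p, d):
    """One-directional unsigned distance: nearest surface crossing along
    the line through p with direction d (an assumed reading of the UODF
    idea for a sphere, not the paper's implementation)."""
    d = d / np.linalg.norm(d)
    b = np.dot(p, d)
    c = np.dot(p, p) - RADIUS ** 2
    disc = b * b - c
    if disc < 0:
        return np.inf                        # the line misses the sphere
    roots = np.array([-b - np.sqrt(disc), -b + np.sqrt(disc)])
    return float(np.min(np.abs(roots)))      # nearest crossing on the line

p = np.array([0.5, 0.0, 0.0])
print(udf(p))                                # 0.5: nearest point anywhere
print(uodf_along(p, np.array([0.0, 1.0, 0.0])))  # ~0.866 along the y-axis
```

The gap between the two values is exactly what makes the representation direction-specific: the same point stores different distances for different query axes.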
- HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields [96.04424738803667]
HOISDF is a guided hand-object pose estimation network.
It exploits hand and object SDFs to provide a global, implicit representation over the complete reconstruction volume.
We show that HOISDF achieves state-of-the-art results on hand-object pose estimation benchmarks.
arXiv Detail & Related papers (2024-02-26T22:48:37Z)
- Measuring the Discrepancy between 3D Geometric Models using Directional Distance Fields [98.15456815880911]
We propose DirDist, an efficient, effective, robust, and differentiable distance metric for 3D geometry data.
As a generic distance metric, DirDist has the potential to advance the field of 3D geometric modeling.
arXiv Detail & Related papers (2024-01-18T05:31:53Z)
- AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction [76.12874759788298]
We propose a joint learning framework that disentangles the pose and the shape.
We show that such aligned SDFs better focus on reconstructing shape details and improve reconstruction accuracy both for hands and objects.
arXiv Detail & Related papers (2022-07-26T13:58:59Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continual learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- FIRe: Fast Inverse Rendering using Directional and Signed Distance Functions [97.5540646069663]
We introduce a novel neural scene representation that we call the directional distance function (DDF).
Our DDF is defined on the unit sphere and predicts the distance to the surface along any given direction.
Based on our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes given a posed depth map.
arXiv Detail & Related papers (2022-03-30T13:24:04Z)
- High-fidelity 3D Model Compression based on Key Spheres [6.59007277780362]
We propose an SDF prediction network using explicit key spheres as input.
Our method achieves high-fidelity, high-compression 3D object coding and reconstruction.
arXiv Detail & Related papers (2022-01-19T09:21:54Z)
- A Deep Signed Directional Distance Function for Object Shape Representation [12.741811850885309]
This paper develops a new shape model that allows synthesizing novel distance views by optimizing a continuous signed directional distance function (SDDF).
Unlike an SDF, which measures distance to the nearest surface in any direction, an SDDF measures distance in a given direction.
Our model encodes by construction the property that SDDF values decrease linearly along the viewing direction (checked numerically in the sketch after this entry).
arXiv Detail & Related papers (2021-07-23T04:11:59Z)
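The linear-decrease property above can be written as SDDF(p + t*d, d) = SDDF(p, d) - t: stepping the ray origin forward by t shortens the directed distance by exactly t. Below is a tiny numerical check on an analytic sphere, under the assumption that the signed directed distance is the (possibly negative) first-hit ray parameter; this is an illustrative stand-in, not the paper's learned model.

```python
import numpy as np

def sddf_sphere(p, d, radius=1.0):
    """Signed directional distance to a sphere along unit direction d:
    the (possibly negative) ray parameter of the nearer intersection.
    An illustrative stand-in for the learned SDDF, not the paper's code."""
    b = np.dot(p, d)
    disc = b * b - (np.dot(p, p) - radius ** 2)
    assert disc >= 0, "the line must intersect the sphere for this check"
    return -b - np.sqrt(disc)

p = np.array([0.0, 0.0, -3.0])   # ray origin
d = np.array([0.0, 0.0, 1.0])    # unit viewing direction
t = 0.7                          # step along the viewing direction

# SDDF(p + t*d, d) == SDDF(p, d) - t: values decrease linearly along d.
assert abs(sddf_sphere(p + t * d, d) - (sddf_sphere(p, d) - t)) < 1e-9
```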