Gradient-SDF: A Semi-Implicit Surface Representation for 3D
Reconstruction
- URL: http://arxiv.org/abs/2111.13652v1
- Date: Fri, 26 Nov 2021 18:33:14 GMT
- Title: Gradient-SDF: A Semi-Implicit Surface Representation for 3D
Reconstruction
- Authors: Christiane Sommer, Lu Sang, David Schubert, Daniel Cremers
- Abstract summary: Gradient-SDF is a novel representation for 3D geometry that combines the advantages of implicit and explicit representations.
By storing at every voxel both the signed distance field and its gradient vector field, we enhance the capability of implicit representations.
We show that (1) the Gradient-SDF allows us to perform direct SDF tracking from depth images, using efficient storage schemes like hash maps, and that (2) the Gradient-SDF representation enables us to perform photometric bundle adjustment directly in a voxel representation.
- Score: 53.315347543761426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Gradient-SDF, a novel representation for 3D geometry that combines
the advantages of implicit and explicit representations. By storing at every
voxel both the signed distance field and its gradient vector field, we
enhance the capability of implicit representations with approaches originally
formulated for explicit surfaces. As concrete examples, we show that (1) the
Gradient-SDF allows us to perform direct SDF tracking from depth images, using
efficient storage schemes like hash maps, and that (2) the Gradient-SDF
representation enables us to perform photometric bundle adjustment directly in
a voxel representation (without transforming into a point cloud or mesh),
naturally enabling a fully implicit optimization of geometry and camera poses,
as well as easy geometry upsampling. Experimental results confirm that this leads to
significantly sharper reconstructions. Since the overall SDF voxel structure is
still respected, the proposed Gradient-SDF is equally suited for (GPU)
parallelization as related approaches.
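To make the data layout concrete, here is a minimal Python sketch of the stored quantities, assuming a uniform voxel grid and using a plain dict as the hash map mentioned in the abstract; all names are illustrative rather than the authors' API. The semi-implicit character shows up in surface_point: because each voxel stores its gradient alongside the distance, it can explicitly name its nearest surface point as the voxel center shifted by -psi along the gradient.

```python
import numpy as np

class GradientSDF:
    """Sketch: every voxel stores the signed distance psi and its unit
    gradient g, indexed by a hash map on integer voxel coordinates."""

    def __init__(self, voxel_size=0.01):
        self.voxel_size = voxel_size
        self.voxels = {}  # (i, j, k) -> (psi, g); Python dict as hash map

    def _key(self, p):
        return tuple(np.floor(np.asarray(p) / self.voxel_size).astype(int))

    def _center(self, key):
        return (np.asarray(key) + 0.5) * self.voxel_size

    def insert(self, p, psi, grad):
        g = np.asarray(grad, dtype=float)
        self.voxels[self._key(p)] = (float(psi), g / np.linalg.norm(g))

    def distance(self, p):
        """First-order SDF estimate psi(v) + g(v) . (p - c_v) for a point
        p falling inside a stored voxel v with center c_v."""
        key = self._key(p)
        if key not in self.voxels:
            return None
        psi, g = self.voxels[key]
        return psi + g @ (np.asarray(p) - self._center(key))

    def surface_point(self, key):
        """Explicit side of the representation: the closest surface point
        of voxel v is its center shifted by -psi along the gradient."""
        psi, g = self.voxels[key]
        return self._center(key) - psi * g
```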
Related papers
- Gradient Distance Function [52.615859148238464]
We show that Gradient Distance Functions (GDFs) can be differentiable at the surface while still being able to represent open surfaces.
This is done by associating to each 3D point a 3D vector whose norm is taken to be the unsigned distance to the surface.
We demonstrate the effectiveness of GDFs on ShapeNet Car, Multi-Garment, and 3D-Scene datasets.
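For intuition, this construction can be written in closed form for a simple shape (the paper learns the field with a network; this analytic sphere example is only an illustration):

```python
import numpy as np

def gdf_sphere(p, center=np.zeros(3), radius=1.0):
    """GDF of a sphere: the vector from p to its closest surface point.
    Its norm is the unsigned distance |r - radius|, and the field passes
    smoothly through zero at the surface, which is what keeps it
    differentiable there. (Undefined at the exact sphere center.)"""
    d = np.asarray(p, dtype=float) - center
    r = np.linalg.norm(d)
    return (radius - r) * (d / r)
```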
arXiv Detail & Related papers (2024-10-29T18:04:01Z)
- GS-Octree: Octree-based 3D Gaussian Splatting for Robust Object-level 3D Reconstruction Under Strong Lighting [4.255847344539736]
We introduce a novel approach that combines octree-based implicit surface representations with Gaussian splatting.
Our method, which leverages the distribution of 3D Gaussians with SDFs, reconstructs more accurate geometry, particularly in images with specular highlights caused by strong lighting.
arXiv Detail & Related papers (2024-06-26T09:29:56Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g. NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we instead apply a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
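A hedged sketch of how a ray-wise weight might enter the Eikonal term, in contrast to a single global constant; the function name and the way the weights are obtained are placeholders, not the paper's actual scheme:

```python
import numpy as np

def raywise_eikonal_loss(sdf_grads, ray_weights):
    """sdf_grads: (num_rays, num_samples, 3) SDF gradients at ray samples.
    ray_weights: (num_rays,) adaptive weights replacing one global
    regularization constant."""
    per_sample = (np.linalg.norm(sdf_grads, axis=-1) - 1.0) ** 2
    return float(np.mean(ray_weights * per_sample.mean(axis=-1)))
```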
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
- Probabilistic Directed Distance Fields for Ray-Based Shape Representations [8.134429779950658]
Directed Distance Fields (DDFs) are a novel neural shape representation that builds upon classical distance fields.
We show how to model inherent discontinuities in the underlying field.
We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction.
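For a known shape the directed distance can be written analytically, which also shows the discontinuity the paper has to model: the field jumps at the hit/miss boundary. A sphere example, for illustration only:

```python
import numpy as np

def ddf_sphere(p, d, radius=1.0):
    """Distance from point p along unit direction d to the first hit on
    a sphere at the origin, or None if the ray misses; the miss/hit
    boundary is the inherent discontinuity mentioned above."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    b = p @ d
    disc = b * b - (p @ p - radius * radius)
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t_near = -b - np.sqrt(disc)
    t_far = -b + np.sqrt(disc)
    if t_near >= 0.0:
        return t_near
    return t_far if t_far >= 0.0 else None
```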
arXiv Detail & Related papers (2024-04-13T21:02:49Z)
- DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field [82.81337273685176]
DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local-to-global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
arXiv Detail & Related papers (2023-08-16T09:06:32Z)
- Differentiable Rendering of Neural SDFs through Reparameterization [32.47993049026182]
We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDFs.
Our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for discontinuities.
Our differentiable renderer can be used to optimize neural shapes from multi-view images and produces comparable 3D reconstructions.
arXiv Detail & Related papers (2022-06-10T20:30:26Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling [41.79675639550555]
We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations.
We make these two representations synergistic by introducing novel consistency losses.
Our hybrid architecture outputs results superior to those of the two equivalent single-representation networks.
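One plausible instantiation of such a consistency loss, offered only as a hypothetical stand-in for the paper's actual terms: if the two representations agree, the implicit function should vanish at the vertices of the explicit output surface.

```python
import numpy as np

def surface_consistency_loss(implicit_fn, mesh_vertices):
    # implicit_fn: maps a 3D point to its signed distance.
    # mesh_vertices: (N, 3) vertices of the explicit output mesh.
    # If the representations are consistent, the SDF is ~0 at every vertex.
    values = np.array([implicit_fn(v) for v in np.asarray(mesh_vertices)])
    return float(np.mean(np.abs(values)))
```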
arXiv Detail & Related papers (2020-07-20T17:24:51Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.