Differentiable Rendering of Neural SDFs through Reparameterization
- URL: http://arxiv.org/abs/2206.05344v1
- Date: Fri, 10 Jun 2022 20:30:26 GMT
- Title: Differentiable Rendering of Neural SDFs through Reparameterization
- Authors: Sai Praveen Bangaru, Michaël Gharbi, Tzu-Mao Li, Fujun Luan, Kalyan
Sunkavalli, Miloš Hašan, Sai Bi, Zexiang Xu, Gilbert Bernstein and
Frédo Durand
- Abstract summary: We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDFs.
Our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for discontinuities.
Our differentiable renderer can be used to optimize neural shapes from multi-view images and produces comparable 3D reconstructions.
- Score: 32.47993049026182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method to automatically compute correct gradients with respect
to geometric scene parameters in neural SDF renderers. Recent physically-based
differentiable rendering techniques for meshes have used edge-sampling to
handle discontinuities, particularly at object silhouettes, but SDFs do not
have a simple parametric form amenable to sampling. Instead, our approach
builds on area-sampling techniques and develops a continuous warping function
for SDFs to account for these discontinuities. Our method leverages the
distance to surface encoded in an SDF and uses quadrature on sphere tracer
points to compute this warping function. We further show that this can be done
by subsampling the points to make the method tractable for neural SDFs. Our
differentiable renderer can be used to optimize neural shapes from multi-view
images and produces comparable 3D reconstructions to recent SDF-based inverse
rendering methods, without the need for 2D segmentation masks to guide the
geometry optimization and without volumetric approximations to the geometry.
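The abstract's method computes its warping function via quadrature on sphere-tracer points. As background, the sphere-tracing loop itself can be sketched as below; this is a minimal illustration using an analytic sphere SDF as a stand-in for the paper's neural SDF, and it omits the warping/quadrature step entirely.

```python
import math

def sdf_sphere(x, y, z, r=1.0):
    """Signed distance to a sphere of radius r centered at the origin.
    Stand-in for a neural SDF; any callable with this signature works."""
    return math.sqrt(x * x + y * y + z * z) - r

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-5):
    """March along the ray, stepping by the queried distance each time.
    The SDF guarantees each step cannot overshoot the surface.
    Returns the hit distance t, or None if the ray misses."""
    t = 0.0
    for _ in range(max_steps):
        px = origin[0] + t * direction[0]
        py = origin[1] + t * direction[1]
        pz = origin[2] + t * direction[2]
        d = sdf(px, py, pz)
        if d < eps:
            return t
        t += d
    return None

# A ray from z = -3 aimed at the origin hits the unit sphere at t = 2.
t = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sdf_sphere)
```

The intermediate points visited by this loop are exactly the "sphere tracer points" the abstract mentions; the paper's contribution is to reuse (a subsample of) them as quadrature nodes for the discontinuity-aware warp.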
Related papers
- Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set [49.780302894956776]
It is vital to infer a signed distance function (SDF) in multi-view based surface reconstruction.
We propose a method that seamlessly merges 3DGS with the learning of neural SDFs.
Our numerical and visual comparisons show that our method outperforms the state of the art on widely used benchmarks.
arXiv Detail & Related papers (2024-10-18T05:48:06Z) - Shrinking: Reconstruction of Parameterized Surfaces from Signed Distance Fields [2.1638817206926855]
We propose a novel method for reconstructing explicit parameterized surfaces from Signed Distance Fields (SDFs)
Our approach iteratively contracts a parameterized initial sphere to conform to the target SDF shape, preserving differentiability and surface parameterization throughout.
This enables downstream applications such as texture mapping, geometry processing, animation, and finite element analysis.
arXiv Detail & Related papers (2024-10-04T03:39:15Z) - Learning Unsigned Distance Fields from Local Shape Functions for 3D Surface Reconstruction [42.840655419509346]
This paper presents a novel neural framework, LoSF-UDF, for reconstructing surfaces from 3D point clouds by leveraging local shape functions to learn UDFs.
We observe that 3D shapes manifest simple patterns within localized areas, prompting us to create a training dataset of point cloud patches.
Our approach learns features within a specific radius around each query point and utilizes an attention mechanism to focus on the crucial features for UDF estimation.
arXiv Detail & Related papers (2024-07-01T14:39:03Z) - RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we introduce a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z) - Probabilistic Directed Distance Fields for Ray-Based Shape Representations [8.134429779950658]
Directed Distance Fields (DDFs) are a novel neural shape representation that builds upon classical distance fields.
We show how to model inherent discontinuities in the underlying field.
We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction.
arXiv Detail & Related papers (2024-04-13T21:02:49Z) - Differentiable Rendering for Pose Estimation in Proximity Operations [4.282159812965446]
Differentiable rendering aims to compute the derivative of the image rendering function with respect to the rendering parameters.
This paper presents a novel algorithm for 6-DoF pose estimation using a differentiable rendering pipeline.
arXiv Detail & Related papers (2022-12-24T06:12:16Z) - NeuralUDF: Learning Unsigned Distance Fields for Multi-view
Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z) - IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from
Photometric Images [52.021529273866896]
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content.
Our method adopts neural representations for geometry as signed distance fields (SDFs) and materials during optimization to enjoy their flexibility and compactness.
We show that our IRON achieves significantly better inverse rendering quality compared to prior works.
arXiv Detail & Related papers (2022-04-05T14:14:18Z) - Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z) - Gradient-SDF: A Semi-Implicit Surface Representation for 3D
Reconstruction [53.315347543761426]
Gradient-SDF is a novel representation for 3D geometry that combines the advantages of implicit and explicit representations.
By storing at every voxel both the signed distance field as well as its gradient vector field, we enhance the capability of implicit representations.
We show that (1) the Gradient-SDF allows us to perform direct SDF tracking from depth images, using efficient storage schemes like hash maps, and that (2) the Gradient-SDF representation enables us to perform photometric bundle adjustment directly in a voxel representation.
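The storage idea described in this entry, keeping both a signed distance and its gradient per voxel in a hash map, can be sketched as follows. This is an illustrative data-structure sketch under assumed names (`store`, `lookup`, `voxel_size`), not the authors' implementation.

```python
# Hash map from integer voxel coordinates to (signed distance, gradient) pairs,
# the semi-implicit layout this entry describes.
voxel_size = 0.1
grid = {}  # (i, j, k) -> (sdf_value, (gx, gy, gz))

def voxel_key(point):
    """Quantize a continuous 3D point to its voxel's integer index."""
    return tuple(int(round(c / voxel_size)) for c in point)

def store(point, sdf_value, gradient):
    """Record the signed distance and its gradient at the point's voxel."""
    grid[voxel_key(point)] = (sdf_value, gradient)

def lookup(point):
    """Return the stored (distance, gradient) pair, or None if empty."""
    return grid.get(voxel_key(point))

store((0.05, 0.0, 0.0), 0.02, (1.0, 0.0, 0.0))
d, g = lookup((0.05, 0.0, 0.0))
```

Storing the gradient alongside the distance is what lets such a representation recover surface normals and perform photometric optimization directly in voxel space, as the entry claims.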
arXiv Detail & Related papers (2021-11-26T18:33:14Z) - Geometric Correspondence Fields: Learned Differentiable Rendering for 3D
Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)