Ray-Distance Volume Rendering for Neural Scene Reconstruction
- URL: http://arxiv.org/abs/2408.15524v1
- Date: Wed, 28 Aug 2024 04:19:14 GMT
- Title: Ray-Distance Volume Rendering for Neural Scene Reconstruction
- Authors: Ruihong Yin, Yunlu Chen, Sezer Karaoglu, Theo Gevers
- Abstract summary: Existing methods in neural scene reconstruction utilize the Signed Distance Function (SDF) to model the density function.
In indoor scenes, the density computed from the SDF for a sampled point may not consistently reflect its real importance in volume rendering.
This work proposes a novel approach for indoor scene reconstruction, which instead parameterizes the density function with the Signed Ray Distance Function (SRDF).
- Score: 15.125703603989715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing methods in neural scene reconstruction utilize the Signed Distance Function (SDF) to model the density function. However, in indoor scenes, the density computed from the SDF for a sampled point may not consistently reflect its real importance in volume rendering, often due to the influence of neighboring objects. To tackle this issue, our work proposes a novel approach for indoor scene reconstruction, which instead parameterizes the density function with the Signed Ray Distance Function (SRDF). Firstly, the SRDF is predicted by the network and transformed to a ray-conditioned density function for volume rendering. We argue that the ray-specific SRDF only considers the surface along the camera ray, so the derived density function is more consistent with the real occupancy than that from the SDF. Secondly, although the SRDF and SDF represent different aspects of scene geometry, their values should share the same sign, indicating the underlying spatial occupancy. Therefore, this work introduces an SRDF-SDF consistency loss to constrain the signs of the SRDF and SDF outputs. Thirdly, this work proposes a self-supervised visibility task, introducing physical visibility geometry to the reconstruction task. The visibility task combines priors from the predicted SRDF and SDF as pseudo labels and contributes to generating more accurate 3D geometry. Our method, implemented with different representations, has been validated on indoor datasets, achieving improved performance in both reconstruction and view synthesis.
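The first two components are concrete enough to sketch in code. Below is a minimal PyTorch sketch of a ray-conditioned density derived from SRDF samples and a sign-consistency penalty between SRDF and SDF. The sigmoid (VolSDF-style) density transform, the hinge form of the penalty, and all names and hyperparameters are assumptions; the abstract does not pin down the exact formulas.

```python
import torch
import torch.nn.functional as F

def srdf_to_density(srdf, alpha=100.0, beta=0.01):
    """Map SRDF samples along each ray to a ray-conditioned density.

    Assumption: a VolSDF-style sigmoid transform; the paper's exact
    parameterization is not given in the abstract.
    srdf: (num_rays, num_samples) signed ray distances.
    """
    return alpha * torch.sigmoid(-srdf / beta)  # density turns on once the ray reaches the surface

def render_weights(density, deltas):
    """Standard volume-rendering quadrature: w_i = T_i * (1 - exp(-sigma_i * delta_i)).

    density, deltas: (num_rays, num_samples); deltas are inter-sample distances.
    """
    a = 1.0 - torch.exp(-density * deltas)  # per-sample opacity
    t = torch.cumprod(
        torch.cat([torch.ones_like(a[:, :1]), 1.0 - a + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]  # accumulated transmittance T_i
    return a * t

def srdf_sdf_sign_consistency(srdf, sdf):
    """Hinge penalty (hypothetical form): zero whenever SRDF and SDF agree in sign."""
    return F.relu(-srdf * sdf).mean()
```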
Related papers
- SplatSDF: Boosting Neural Implicit SDF via Gaussian Splatting Fusion [13.013832790126541]
We propose a novel neural implicit SDF called "SplatSDF" to fuse 3DGS and SDF-NeRF at an architecture level, with significant boosts to geometric and photometric accuracy and convergence speed.
Our method outperforms state-of-the-art SDF-NeRF models on geometric and photometric evaluation by the time of submission.
arXiv Detail & Related papers (2024-11-23T06:35:19Z)
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g. NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor (sketched after this entry).
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
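The contrast RaNeuS draws between a globally constant and a ray-wise Eikonal regularization is easy to state in code. In the sketch below, ray_weight stands in for the paper's per-ray factor; its actual weighting rule is not given in the summary above, so this is only a hedged illustration.

```python
import torch

def eikonal_loss(grad_sdf, ray_weight=None):
    """Eikonal regularizer (||grad f(x)|| - 1)^2 over ray samples.

    grad_sdf:   (num_rays, num_samples, 3) SDF gradients at the samples.
    ray_weight: (num_rays,) hypothetical ray-wise factors; None recovers the
                globally constant regularization that RaNeuS improves upon.
    """
    residual = (grad_sdf.norm(dim=-1) - 1.0) ** 2  # (num_rays, num_samples)
    per_ray = residual.mean(dim=-1)                # (num_rays,)
    if ray_weight is None:
        return per_ray.mean()                      # globally constant weighting
    return (ray_weight * per_ray).mean()           # ray-adaptive weighting
```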
- DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field [82.81337273685176]
DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local-to-global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
arXiv Detail & Related papers (2023-08-16T09:06:32Z)
- Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation [46.635542063913185]
Implicit neural rendering, which uses signed distance function representation with geometric priors, has led to impressive progress in the surface reconstruction of large-scale scenes.
We conduct experiments to identify limitations of the original color rendering loss and priors-embedded SDF scene representation.
We propose a feature-based color rendering loss that utilizes non-zero feature values to bring back optimization signals.
arXiv Detail & Related papers (2023-03-16T08:34:02Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- Differentiable Rendering of Neural SDFs through Reparameterization [32.47993049026182]
We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDFs.
Our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for discontinuities.
Our differentiable renderer can be used to optimize neural shapes from multi-view images and produces comparable 3D reconstructions.
arXiv Detail & Related papers (2022-06-10T20:30:26Z)
- Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction [53.315347543761426]
Gradient-SDF is a novel representation for 3D geometry that combines the advantages of implicit and explicit representations.
By storing at every voxel both the signed distance field as well as its gradient vector field, we enhance the capability of implicit representations.
We show that (1) the Gradient-SDF allows us to perform direct SDF tracking from depth images, using efficient storage schemes like hash maps, and that (2) the Gradient-SDF representation enables us to perform photometric bundle adjustment directly in a voxel representation (a minimal storage sketch follows this entry).
arXiv Detail & Related papers (2021-11-26T18:33:14Z)
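The storage scheme the Gradient-SDF summary describes, a signed distance plus its gradient at every voxel kept in a hash map, can be sketched directly. The field names, the fusion weight, and the surface-projection query below are assumptions, not the paper's actual data structure.

```python
from dataclasses import dataclass

@dataclass
class GradientSDFVoxel:
    # Per-voxel payload: the signed distance AND its gradient, per the summary.
    sdf: float = 0.0
    grad: tuple = (0.0, 0.0, 0.0)  # points toward increasing distance
    weight: float = 0.0            # hypothetical fusion weight, as in typical TSDF pipelines

class GradientSDFGrid:
    """Sparse voxel grid keyed by integer coordinates, i.e. the hash-map storage the summary mentions."""

    def __init__(self, voxel_size=0.01):
        self.voxel_size = voxel_size
        self.voxels = {}  # (i, j, k) -> GradientSDFVoxel

    def _key(self, p):
        return tuple(int(c // self.voxel_size) for c in p)

    def nearest_surface_point(self, p):
        """Project p onto the zero level set: x* = x - sdf(x) * grad(x),
        the kind of direct geometric query that storing the gradient makes cheap."""
        v = self.voxels.get(self._key(p))
        if v is None:
            return None
        return tuple(pc - v.sdf * g for pc, g in zip(p, v.grad))
```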
- A Deep Signed Directional Distance Function for Object Shape Representation [12.741811850885309]
This paper develops a new shape model that allows synthesizing novel distance views by optimizing a continuous signed directional distance function (SDDF).
Unlike an SDF, which measures distance to the nearest surface in any direction, an SDDF measures distance in a given direction.
Our model encodes by construction the property that SDDF values decrease linearly along the viewing direction, as illustrated after this entry.
arXiv Detail & Related papers (2021-07-23T04:11:59Z)
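The linearity property this summary states can be written as SDDF(p + t*v, v) = SDDF(p, v) - t: stepping a distance t along the viewing direction reduces the directional distance by exactly t. The sphere SDDF below is a hypothetical closed-form example used only to check that invariant; it is not the paper's learned model.

```python
import numpy as np

def sphere_sddf(p, v, radius=1.0):
    """Hypothetical analytic SDDF of an origin-centered sphere:
    signed distance from point p to the surface along unit direction v."""
    p, v = np.asarray(p, float), np.asarray(v, float)
    b = p @ v
    disc = b * b - (p @ p - radius * radius)
    if disc < 0.0:
        return np.inf            # the ray through p along v misses the sphere
    return -b - np.sqrt(disc)    # signed: negative once p has passed the surface

# Check the linear-decrease invariant: SDDF(p + t*v, v) == SDDF(p, v) - t.
p = np.array([0.0, 0.0, -3.0])
v = np.array([0.0, 0.0, 1.0])
t = 0.5
assert np.isclose(sphere_sddf(p + t * v, v), sphere_sddf(p, v) - t)
```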