Volume Rendering of Neural Implicit Surfaces
- URL: http://arxiv.org/abs/2106.12052v1
- Date: Tue, 22 Jun 2021 20:23:16 GMT
- Title: Volume Rendering of Neural Implicit Surfaces
- Authors: Lior Yariv, Jiatao Gu, Yoni Kasten, Yaron Lipman
- Abstract summary: This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging multiview scene datasets produced high-quality geometry reconstructions.
- Score: 57.802056954935495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural volume rendering has recently become increasingly popular
due to its success in synthesizing novel views of a scene from a sparse set
of input images. So far, the geometry learned by neural volume rendering
techniques has been modeled using a generic density function. Furthermore,
the geometry itself has been extracted using an arbitrary level set of the
density function, leading to noisy, often low-fidelity reconstructions. The
goal of this paper is to improve geometry representation and reconstruction
in neural volume rendering. We achieve that by modeling the volume density
as a function of the geometry. This is in contrast to previous work, which
models the geometry as a function of the volume density. In more detail, we
define the volume density function as Laplace's cumulative distribution
function (CDF) applied to a signed distance function (SDF) representation.
This simple density representation has three benefits: (i) it provides a
useful inductive bias to the geometry learned in the neural volume rendering
process; (ii) it facilitates a bound on the opacity approximation error,
leading to accurate sampling of the viewing ray, which is important for a
precise coupling of geometry and radiance; and (iii) it allows efficient
unsupervised disentanglement of shape and appearance in volume rendering.
Applying this new density representation to challenging multiview scene
datasets produced high-quality geometry reconstructions, outperforming
relevant baselines. Furthermore, the disentanglement of shape and appearance
makes it possible to switch them between scenes.
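To make the density definition concrete, here is a minimal NumPy sketch, assuming a zero-mean Laplace CDF with scale beta applied to the negated SDF and a global scale alpha; the toy sphere SDF, the sampling grid, and the specific alpha and beta values are illustrative choices, not the paper's configuration.

```python
# Minimal sketch: Laplace-CDF density over an SDF, plus standard
# discrete volume rendering along one ray. alpha, beta, the sphere
# SDF, and the sampling are illustrative choices only.
import numpy as np

def laplace_cdf(s, beta):
    """CDF of a zero-mean Laplace distribution with scale beta."""
    return np.where(s <= 0.0,
                    0.5 * np.exp(s / beta),
                    1.0 - 0.5 * np.exp(-s / beta))

def density(sdf, alpha, beta):
    """sigma = alpha * Psi_beta(-sdf): ~alpha deep inside the shape,
    ~0 far outside, alpha/2 exactly on the surface."""
    return alpha * laplace_cdf(-sdf, beta)

def sphere_sdf(x, radius=1.0):
    """Signed distance to a sphere (positive outside)."""
    return np.linalg.norm(x, axis=-1) - radius

# March one ray through the sphere and accumulate opacity.
t = np.linspace(0.0, 4.0, 256)
ray = np.array([0.0, 0.0, -2.0]) + t[:, None] * np.array([0.0, 0.0, 1.0])
beta = 0.01
sigma = density(sphere_sdf(ray), alpha=1.0 / beta, beta=beta)

delta = np.diff(t, append=t[-1])                 # sample spacings
transmittance = np.exp(-np.cumsum(sigma * delta))
opacity = 1.0 - transmittance[-1]                # ~1.0: the ray is absorbed
```

As beta shrinks, the density tends to a scaled indicator of the shape's interior, so the opacity saturates exactly at the SDF zero level set; this is the inductive bias the abstract refers to.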
Related papers
- Q-SLAM: Quadric Representations for Monocular SLAM [89.05457684629621]
Monocular SLAM has long grappled with the challenge of accurately modeling 3D geometries.
Recent advances in Neural Radiance Fields (NeRF)-based monocular SLAM have shown promise.
We propose a novel approach that reimagines volumetric representations through the lens of quadric forms.
arXiv Detail & Related papers (2024-03-12T23:27:30Z)
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z)
- Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation through direct generative modeling of a continuous implicit representation in the wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets (see the sketch after this entry).
We may also jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
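As a rough illustration of the coarse/detail coefficient volumes mentioned in the entry above, the sketch below truncates a sphere SDF on a regular grid and applies a single-level 3D biorthogonal wavelet transform with PyWavelets. The library choice, the 'bior2.2' wavelet, the grid resolution, and the truncation threshold are all our assumptions, not the paper's setup.

```python
# Speculative sketch: truncated SDF grid -> coarse + detail wavelet
# coefficient volumes (PyWavelets and all constants are our choices).
import numpy as np
import pywt

# Toy TSDF: signed distance to a sphere on a 64^3 grid, truncated to +/-0.1.
axis = np.linspace(-1.0, 1.0, 64)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
tsdf = np.clip(np.linalg.norm(grid, axis=-1) - 0.5, -0.1, 0.1)

# One level of a 3D biorthogonal transform: a single coarse volume plus
# seven detail coefficient volumes ('aad' ... 'ddd').
coarse, details = pywt.wavedecn(tsdf, wavelet="bior2.2", level=1)
print(coarse.shape, sorted(details))
```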
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering neural implicit surface reconstruction method capable of recovering fine geometry details.
We impose multi-view feature consistency on surface points, which are derived by interpolating SDF zero-crossings from samples along each ray (see the sketch after this entry).
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
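A small sketch of the zero-crossing interpolation referenced in the D-NeuS entry above: between the last sample outside the surface and the first sample inside, the surface depth follows from linear interpolation of the SDF. The function name and the sign convention (positive SDF outside the surface) are our assumptions.

```python
# Hedged sketch: surface depth from the first SDF sign change along a ray.
import numpy as np

def first_zero_crossing(t, sdf):
    """Depths t[i] and SDF values along one ray -> interpolated depth of
    the first positive-to-negative sign change, or None if the ray misses."""
    hits = np.where((sdf[:-1] > 0.0) & (sdf[1:] <= 0.0))[0]
    if hits.size == 0:
        return None
    i = hits[0]
    # Linear model between samples i and i+1; solve sdf(t*) = 0.
    return t[i] + sdf[i] * (t[i + 1] - t[i]) / (sdf[i] - sdf[i + 1])

t = np.array([0.0, 0.5, 1.0, 1.5])
d = np.array([1.2, 0.4, -0.4, -1.0])
print(first_zero_crossing(t, d))  # 0.75, halfway between 0.5 and 1.0
```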
- Neural Implicit Surface Reconstruction using Imaging Sonar [38.73010653104763]
We present a technique for dense 3D reconstruction of objects using an imaging sonar, also known as forward-looking sonar (FLS).
Compared to previous methods that model the scene geometry as point clouds or volumetric grids, we represent geometry as a neural implicit function.
We perform experiments on real and synthetic datasets and show that our algorithm reconstructs high-fidelity surface geometry from multi-view FLS images at much higher quality than was possible with previous techniques and without suffering from their associated memory overhead.
arXiv Detail & Related papers (2022-09-17T02:23:09Z)
- Multi-View Reconstruction using Signed Ray Distance Functions (SRDF) [22.75986869918975]
We investigate a new computational approach that builds on a novel volumetric shape representation.
The shape energy associated with this representation evaluates 3D geometry given color images and does not need appearance prediction.
In practice, we propose an implicit shape representation, the SRDF, based on signed distances that we parameterize by depths along camera rays (see the sketch after this entry).
arXiv Detail & Related papers (2022-08-31T19:32:17Z)
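One plausible reading of the depth parameterization in the SRDF entry above, sketched below: along a camera ray whose surface lies at depth D, the signed ray distance at a sample depth t is D - t (positive in front of the surface, negative behind). This interpretation and the names are ours, for illustration only.

```python
# Hedged sketch of a signed ray distance parameterized by per-ray depth.
import numpy as np

def srdf(t_samples, surface_depth):
    """Signed ray distance at depths t along one camera ray."""
    return surface_depth - t_samples

t = np.linspace(0.0, 3.0, 7)
print(srdf(t, surface_depth=1.5))  # sign flips at the surface depth
```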
- Improved surface reconstruction using high-frequency details [44.73668037810989]
We propose a novel method to improve the quality of surface reconstruction in neural rendering.
Our method reconstructs high-frequency surface details and achieves better surface reconstruction quality than the current state of the art.
arXiv Detail & Related papers (2022-06-15T23:46:48Z)
- Neural Convolutional Surfaces [59.172308741945336]
This work is concerned with a representation of shapes that disentangles fine, local, and possibly repeating geometry from global, coarse structures.
We show that this approach achieves better neural shape compression than the state of the art, as well as enabling manipulation and transfer of shape details.
arXiv Detail & Related papers (2022-04-05T15:40:11Z)