NeRF Revisited: Fixing Quadrature Instability in Volume Rendering
- URL: http://arxiv.org/abs/2310.20685v2
- Date: Fri, 19 Jan 2024 18:53:13 GMT
- Title: NeRF Revisited: Fixing Quadrature Instability in Volume Rendering
- Authors: Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Krishna Thomas, Leonidas Guibas, Ke Li
- Abstract summary: Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum.
The rendered result is unstable w.r.t. the choice of samples along the ray, a phenomenon that we dub quadrature instability.
We propose a mathematically principled solution by reformulating the sample-based rendering equation so that it corresponds to the exact integral under piecewise linear volume density.
- Score: 8.82933411096762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRF) rely on volume rendering to synthesize novel
views. Volume rendering requires evaluating an integral along each ray, which
is numerically approximated with a finite sum that corresponds to the exact
integral along the ray under piecewise constant volume density. As a
consequence, the rendered result is unstable w.r.t. the choice of samples along
the ray, a phenomenon that we dub quadrature instability. We propose a
mathematically principled solution by reformulating the sample-based rendering
equation so that it corresponds to the exact integral under piecewise linear
volume density. This simultaneously resolves multiple issues: conflicts between
samples along different rays, imprecise hierarchical sampling, and
non-differentiability of quantiles of ray termination distances w.r.t. model
parameters. We demonstrate several benefits over the classical sample-based
rendering equation, such as sharper textures, better geometric reconstruction,
and stronger depth supervision. Our proposed formulation can also be used as
a drop-in replacement for the volume rendering equation of existing NeRF-based
methods. Our project page can be found at pl-nerf.github.io.
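To make the quadrature difference concrete, here is a minimal NumPy sketch (not the authors' code) contrasting the classical piecewise-constant quadrature with a trapezoidal optical depth, which is the exact integral when the density varies linearly between samples; the midpoint colour per interval is an assumption of this sketch, since the paper derives its own closed-form weights.

    import numpy as np

    def render_piecewise_constant(sigma, rgb, t):
        # Classical NeRF quadrature: density is held constant on each interval,
        # so the optical depth of interval i is sigma_i * delta_i.
        delta = np.diff(t)                                          # interval lengths
        alpha = 1.0 - np.exp(-sigma[:-1] * delta)                   # interval opacities
        T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])   # transmittance
        w = T * alpha                                               # per-interval weights
        return (w[:, None] * rgb[:-1]).sum(axis=0), w

    def render_piecewise_linear(sigma, rgb, t):
        # Trapezoidal optical depth: exact when the density is linear on each
        # interval, so the accumulated transmittance does not change when sample
        # positions shift inside a region of (locally) linear density.
        delta = np.diff(t)
        alpha = 1.0 - np.exp(-0.5 * (sigma[:-1] + sigma[1:]) * delta)
        T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
        w = T * alpha
        c_mid = 0.5 * (rgb[:-1] + rgb[1:])                          # assumed midpoint colour
        return (w[:, None] * c_mid).sum(axis=0), w

Given densities sigma and colours rgb queried at N sample distances t along a ray, both functions return an estimated pixel colour and the per-interval weights used for hierarchical sampling and depth supervision; only the optical-depth rule differs, which is the source of the sample-placement sensitivity the paper calls quadrature instability.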
Related papers
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2Kx2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- Differentiable Rendering with Reparameterized Volume Sampling [2.717399369766309]
In view synthesis, a neural radiance field approximates underlying density and radiance fields based on a sparse set of scene pictures.
This rendering algorithm is fully differentiable and facilitates gradient-based optimization of the fields.
We propose a simple end-to-end differentiable sampling algorithm based on inverse transform sampling (a generic sketch of inverse transform sampling is given after this list).
arXiv Detail & Related papers (2023-02-21T19:56:50Z)
- Sampling Neural Radiance Fields for Refractive Objects [8.539183778516795]
In this work, the scene is instead a heterogeneous volume with a piecewise-constant refractive index, where the ray path bends when it crosses between regions of different refractive index.
For novel view synthesis of refractive objects, our NeRF-based framework aims to optimize the radiance fields of bounded volume and boundary from multi-view posed images with refractive object silhouettes.
Given the refractive index, we extend the stratified and hierarchical sampling techniques in NeRF to allow drawing samples along a curved path tracked by the Eikonal equation.
arXiv Detail & Related papers (2022-11-27T11:43:21Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- DIVeR: Real-time and Accurate Neural Radiance Fields with Deterministic Integration for Volume Rendering [11.05429980273764]
DIVeR builds on the key ideas of NeRF and its variants -- density models and volume rendering -- to learn 3D object models that can be rendered realistically from small numbers of images.
In contrast to all previous NeRF methods, DIVeR uses deterministic rather than stochastic estimates of the volume rendering integral.
arXiv Detail & Related papers (2021-11-19T20:32:59Z)
- Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging scene multiview datasets produced high quality geometry reconstructions.
arXiv Detail & Related papers (2021-06-22T20:23:16Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
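As a generic illustration of the inverse transform sampling referenced in the Reparameterized Volume Sampling entry above, the sketch below draws refined sample distances along a ray by inverting the CDF built from coarse per-interval weights (e.g. the w returned by the earlier sketch). This is the standard hierarchical-sampling recipe under a piecewise-constant CDF, not that paper's specific reparameterization.

    import numpy as np

    def sample_pdf(t, weights, n_samples, rng=None):
        # Inverse transform sampling: treat the coarse per-interval weights as an
        # unnormalized piecewise-constant pdf over the ray, build its CDF, and
        # invert the CDF at uniformly drawn quantiles.
        rng = np.random.default_rng(0) if rng is None else rng
        pdf = weights / np.maximum(weights.sum(), 1e-10)        # (N-1,) normalized
        cdf = np.concatenate([[0.0], np.cumsum(pdf)])           # (N,) monotone in [0, 1]
        u = rng.uniform(size=n_samples)                         # quantiles
        idx = np.searchsorted(cdf, u, side="right") - 1         # interval of each quantile
        idx = np.clip(idx, 0, len(pdf) - 1)
        # Linearly interpolate the inverse CDF inside the chosen interval.
        frac = (u - cdf[idx]) / np.maximum(cdf[idx + 1] - cdf[idx], 1e-10)
        return t[idx] + frac * (t[idx + 1] - t[idx])            # refined sample distances

Here t holds the N sample distances from a coarse pass and weights the corresponding N-1 rendering weights; stratifying u instead of drawing it uniformly is a common variant.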