DIVeR: Real-time and Accurate Neural Radiance Fields with Deterministic
Integration for Volume Rendering
- URL: http://arxiv.org/abs/2111.10427v1
- Date: Fri, 19 Nov 2021 20:32:59 GMT
- Title: DIVeR: Real-time and Accurate Neural Radiance Fields with Deterministic
Integration for Volume Rendering
- Authors: Liwen Wu, Jae Yong Lee, Anand Bhattad, Yuxiong Wang, David Forsyth
- Abstract summary: DIVeR builds on the key ideas of NeRF and its variants -- density models and volume rendering -- to learn 3D object models that can be rendered realistically from small numbers of images.
In contrast to all previous NeRF methods, DIVeR uses deterministic rather than stochastic estimates of the volume rendering integral.
- Score: 11.05429980273764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: DIVeR builds on the key ideas of NeRF and its variants -- density models and
volume rendering -- to learn 3D object models that can be rendered
realistically from small numbers of images. In contrast to all previous NeRF
methods, DIVeR uses deterministic rather than stochastic estimates of the
volume rendering integral. DIVeR's representation is a voxel based field of
features. To compute the volume rendering integral, a ray is broken into
intervals, one per voxel; components of the volume rendering integral are
estimated from the features for each interval using an MLP, and the components
are aggregated. As a result, DIVeR can render thin translucent structures that
are missed by other integrators. Furthermore, DIVeR's representation has
semantics that is relatively exposed compared to other such methods -- moving
feature vectors around in the voxel space results in natural edits. Extensive
qualitative and quantitative comparisons to current state-of-the-art methods
show that DIVeR produces models that (1) render at or above state-of-the-art
quality, (2) are very small without being baked, (3) render very fast without
being baked, and (4) can be edited in natural ways.
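The aggregation step described in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration assuming an MLP has already mapped each ray interval's integrated voxel features to an opacity and a color; the function name and shapes are hypothetical, not DIVeR's actual code.

```python
# Hedged sketch: deterministic front-to-back compositing over per-voxel
# ray intervals, assuming an MLP has already produced (alpha, color) for
# each interval. Names and shapes are illustrative assumptions.
import torch

def composite_intervals(alphas: torch.Tensor, colors: torch.Tensor) -> torch.Tensor:
    """Aggregate per-interval components of the volume rendering integral.

    alphas: (N,)   opacity contributed by each ray/voxel interval
    colors: (N, 3) radiance contributed by each interval
    returns: (3,)  composited pixel color
    """
    # T_i = prod_{j<i} (1 - alpha_j): transmittance up to interval i
    transmittance = torch.cumprod(
        torch.cat([alphas.new_ones(1), 1.0 - alphas[:-1]]), dim=0
    )
    weights = transmittance * alphas           # w_i = T_i * alpha_i
    return (weights[:, None] * colors).sum(0)  # C = sum_i w_i * c_i
```

Because the intervals are the exact ray-voxel intersections rather than random samples, repeated renders of the same ray are identical, and thin translucent structures that stochastic samplers can skip still receive a nonzero interval.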
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
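A loose sketch of the in-voxel Transformer idea from the CVT-xRF summary above: per-point features inside a voxel are treated as a token sequence, and an encoder predicts properties for the remaining points on the ray. All module names, dimensions, and the output head here are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: a Transformer over points sampled inside one voxel,
# predicting per-point (density, rgb) features for volume rendering.
# Dimensions and modules are illustrative assumptions.
import torch
import torch.nn as nn

class InVoxelTransformer(nn.Module):
    def __init__(self, feat_dim: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, 4)  # (density, rgb) per point

    def forward(self, point_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, P, feat_dim) features of P points in a voxel
        tokens = self.encoder(point_feats)
        return self.head(tokens)            # (B, P, 4)
```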
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Rethinking Directional Integration in Neural Radiance Fields [8.012147983948665]
We introduce a modification to the NeRF rendering equation which is as simple as a few lines of code change for any NeRF variant.
We show that the modified equation can be interpreted as light field rendering with learned ray embeddings.
arXiv Detail & Related papers (2023-11-28T18:59:50Z)
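A plausible reading of the modified equation, based only on the summary above: keep the compositing weights, but aggregate view-independent features into a ray embedding and apply the view direction once, after the sum, which is what makes the light-field interpretation natural. The decoder g_\theta and feature field f below are our notation, not necessarily the paper's.

```latex
% Standard NeRF compositing: the view direction d enters every sample.
C(\mathbf{r}, \mathbf{d}) = \sum_i w_i \, c(\mathbf{x}_i, \mathbf{d}),
\qquad w_i = T_i \bigl(1 - e^{-\sigma_i \delta_i}\bigr)

% A plausible form of the modification: aggregate view-independent
% features into a ray embedding, then decode once with d.
C(\mathbf{r}, \mathbf{d}) =
  g_\theta\!\Bigl( \sum_i w_i \, f(\mathbf{x}_i), \ \mathbf{d} \Bigr)
```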
- NeRF Revisited: Fixing Quadrature Instability in Volume Rendering [8.82933411096762]
Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum.
The rendered result is unstable w.r.t. the choice of samples along the ray, a phenomenon that we dub quadrature instability.
We propose a mathematically principled solution by reformulating the sample-based rendering equation so that it corresponds to the exact integral under piecewise linear volume density.
arXiv Detail & Related papers (2023-10-31T17:49:48Z)
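For reference, the finite sum in question is the standard NeRF quadrature under piecewise-constant density, shown below; its weights depend on where the samples t_i land, which is the source of the instability the paper removes by deriving the exact integral under piecewise linear density.

```latex
% Standard NeRF quadrature (piecewise-constant density between samples t_i):
C \approx \sum_{i=1}^{N} T_i \,\bigl(1 - e^{-\sigma_i \delta_i}\bigr)\, c_i,
\qquad
T_i = \exp\Bigl(-\textstyle\sum_{j<i} \sigma_j \delta_j\Bigr),
\qquad
\delta_i = t_{i+1} - t_i
```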
- Local Implicit Ray Function for Generalizable Radiance Field Representation [20.67358742158244]
We propose LIRF (Local Implicit Ray Function), a generalizable neural rendering approach for novel view rendering.
Given 3D positions within conical frustums, LIRF takes 3D coordinates and the features of conical frustums as inputs and predicts a local volumetric radiance field.
Since the coordinates are continuous, LIRF renders high-quality novel views at a continuously-valued scale via volume rendering.
arXiv Detail & Related papers (2023-04-25T11:52:33Z)
- IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields [12.056350920398396]
We propose IBL-NeRF, which decomposes the neural radiance fields (NeRF) of large-scale indoor scenes into intrinsic components.
Our approach inherits superior visual quality and multi-view consistency for synthesized images as well as the intrinsic components.
arXiv Detail & Related papers (2022-10-15T05:38:55Z)
- Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging scene multiview datasets produced high quality geometry reconstructions.
arXiv Detail & Related papers (2021-06-22T20:23:16Z)
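The density-as-geometry idea can be written out. As we recall it from the paper, the density is a learnable Laplace-CDF transform of the signed distance to the surface; treat the exact form below as an assumption to verify against the original.

```latex
% Density as a function of geometry: a learnable transform of the signed
% distance d_\Omega(x) to the surface (Laplace CDF \Psi_\beta, scale \beta):
\sigma(\mathbf{x}) = \alpha \, \Psi_\beta\bigl(-d_\Omega(\mathbf{x})\bigr),
\qquad
\Psi_\beta(s) =
\begin{cases}
\tfrac{1}{2}\exp\bigl(\tfrac{s}{\beta}\bigr) & s \le 0\\[2pt]
1 - \tfrac{1}{2}\exp\bigl(-\tfrac{s}{\beta}\bigr) & s > 0
\end{cases}
```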
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
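A sketch of the unified formulation as we understand it: a single occupancy field o(x) in [0, 1] replaces the alpha values in volume rendering, and thresholding the same field yields a surface for surface rendering. The equation below is our reconstruction, not quoted from the paper.

```latex
% Unified rendering: one occupancy field o(x) in [0,1] drives both
% volume rendering (below) and surface extraction (e.g. o = 0.5):
C(\mathbf{r}) = \sum_{i=1}^{N}
  o(\mathbf{x}_i) \prod_{j<i} \bigl(1 - o(\mathbf{x}_j)\bigr)\,
  c(\mathbf{x}_i, \mathbf{d})
```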
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.