DONeRF: Towards Real-Time Rendering of Neural Radiance Fields using
Depth Oracle Networks
- URL: http://arxiv.org/abs/2103.03231v1
- Date: Thu, 4 Mar 2021 18:55:09 GMT
- Title: DONeRF: Towards Real-Time Rendering of Neural Radiance Fields using
Depth Oracle Networks
- Authors: Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz,
Chakravarty R. Alla Chaitanya, Anton Kaplanyan, Markus Steinberger
- Abstract summary: DONeRF is a dual network design with a depth oracle network as a first step and a locally sampled shading network for ray accumulation.
We are the first to render raymarching-based neural representations at interactive frame rates (15 frames per second at 800x800) on a single GPU.
- Score: 6.2444658061424665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent research explosion around Neural Radiance Fields (NeRFs) shows
that there is immense potential for implicitly storing scene and lighting
information in neural networks, e.g., for novel view generation. However, one
major limitation preventing the widespread use of NeRFs is the prohibitive
computational cost of excessive network evaluations along each view ray,
requiring dozens of petaFLOPS when aiming for real-time rendering on current
devices. We show that the number of samples required for each view ray can be
significantly reduced when local samples are placed around surfaces in the
scene. To this end, we propose a depth oracle network, which predicts ray
sample locations for each view ray with a single network evaluation. We show
that using a classification network around logarithmically discretized and
spherically warped depth values is essential to encode surface locations rather
than directly estimating depth. The combination of these techniques leads to
DONeRF, a dual network design with a depth oracle network as a first step and a
locally sampled shading network for ray accumulation. With our design, we
reduce the inference costs by up to 48x compared to NeRF. Using an
off-the-shelf inference API in combination with simple compute kernels, we are
the first to render raymarching-based neural representations at interactive
frame rates (15 frames per second at 800x800) on a single GPU. At the same
time, since we focus on the important parts of the scene around surfaces, we
achieve equal or better quality compared to NeRF.
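To make the two-stage design concrete, here is a minimal sketch, assuming a PyTorch-style implementation: a depth oracle network predicts a distribution over logarithmically discretized depth values along each ray, and a small shading network is evaluated only at a few samples placed around the predicted surface depth before standard alpha compositing. This is not the authors' code; the network sizes, bin count, local sampling window, and the reduction of the oracle output to a single expected depth are illustrative assumptions, and the paper's spherical warping of depth values is omitted.

```python
# Illustrative sketch of a DONeRF-style pipeline (not the authors' implementation).
import torch
import torch.nn as nn

NUM_BINS = 128        # depth classification bins (assumed)
NUM_LOCAL = 16        # local samples per ray (assumed; the paper uses very few)
NEAR, FAR = 0.5, 20.0 # scene depth range (assumed)

def log_bin_centers(near, far, num_bins):
    """Logarithmically discretized depth values between near and far."""
    t = torch.linspace(0.0, 1.0, num_bins)
    return near * (far / near) ** t

class DepthOracle(nn.Module):
    """Predicts, per ray, logits over the discretized depth bins."""
    def __init__(self, ray_dim=6, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(ray_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_BINS))
    def forward(self, rays):           # rays: (B, 6) = origin + direction
        return self.mlp(rays)          # (B, NUM_BINS) bin logits

class ShadingNet(nn.Module):
    """Maps a 3D sample point plus view direction to color and density."""
    def __init__(self, in_dim=6, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))      # RGB + density
    def forward(self, x):
        return self.mlp(x)

def render_rays(oracle, shader, rays_o, rays_d):
    rays = torch.cat([rays_o, rays_d], dim=-1)
    logits = oracle(rays)                                    # single oracle call per ray
    centers = log_bin_centers(NEAR, FAR, NUM_BINS).to(rays)  # (NUM_BINS,)
    depth = (torch.softmax(logits, -1) * centers).sum(-1, keepdim=True)  # expected depth
    # Place a handful of shading samples in a small window around the predicted depth.
    offsets = torch.linspace(-0.5, 0.5, NUM_LOCAL).to(rays)  # window width assumed
    z = (depth + offsets).clamp(NEAR, FAR)                   # (B, NUM_LOCAL)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * z[..., None]
    out = shader(torch.cat([pts, rays_d[:, None, :].expand_as(pts)], -1))
    rgb, sigma = torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])
    # Standard alpha compositing, but only over the few local samples.
    delta = z[:, 1:] - z[:, :-1]
    delta = torch.cat([delta, torch.full_like(delta[:, :1], 1e-2)], -1)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], -1), -1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)              # (B, 3) pixel colors
```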
Related papers
- SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with
Simpler Solutions [6.9980855647933655]
Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
arXiv Detail & Related papers (2023-09-07T18:02:57Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF's, while preserving NeLF's rendering speed with a single network forward pass per pixel.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- Volume Feature Rendering for Fast Neural Radiance Field Reconstruction [11.05302598034426]
Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives.
In NeRF's rendering pipeline, neural networks are used either to represent a scene on their own or to transform the queried learnable feature vector of a point into the expected color or density.
We propose to render the queried feature vectors of a ray first and then transform the rendered feature vector to the final pixel color by a neural network.
arXiv Detail & Related papers (2023-05-29T06:58:27Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance over existing neural view synthesis methods, by large margins, on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray in ray-marching or volumetric approaches.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.