Volume Feature Rendering for Fast Neural Radiance Field Reconstruction
- URL: http://arxiv.org/abs/2305.17916v2
- Date: Wed, 31 May 2023 07:03:11 GMT
- Title: Volume Feature Rendering for Fast Neural Radiance Field Reconstruction
- Authors: Kang Han, Wei Xiang and Lu Yu
- Abstract summary: Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives.
In NeRF's rendering pipeline, neural networks are used either to represent a scene independently or to transform the queried learnable feature vector of a point into the expected color or density.
We propose to render the queried feature vectors of a ray first and then transform the rendered feature vector to the final pixel color by a neural network.
- Score: 11.05302598034426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRFs) are able to synthesize realistic novel views
from multi-view images captured from distinct positions and perspectives. In
NeRF's rendering pipeline, neural networks are used to represent a scene
independently or to transform the queried learnable feature vector of a point
into the expected color or density. With the aid of geometry guides, in the form
of either occupancy grids or proposal networks, the number of neural network
evaluations can be
reduced from hundreds to dozens in the standard volume rendering framework.
Instead of rendering the colors yielded by neural network evaluations, we propose
to render the queried feature vectors of a ray first and then transform the
rendered feature vector to the final pixel color by a neural network. This
fundamental change to the standard volume rendering framework requires only a
single neural network evaluation to render a pixel, which substantially lowers
the high computational complexity of the rendering framework attributed to a
large number of neural network evaluations. Consequently, we can use a
comparatively larger neural network to achieve better rendering quality while
maintaining the same training and rendering time costs. Our model achieves the
state-of-the-art rendering quality on both synthetic and real-world datasets
while requiring a training time of several minutes.
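A minimal NumPy sketch of the contrast described in the abstract: the standard framework evaluates a color network once per sample and then composites the colors, whereas volume feature rendering composites the queried feature vectors first and decodes the rendered feature with a single network evaluation. The toy two-layer decoder, random features, and densities below are illustrative assumptions, not the authors' architecture.

```python
# Sketch: standard volume rendering vs. volume feature rendering (one ray).
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 32   # samples along one ray (already pruned by a geometry guide)
FEAT_DIM = 16    # dimensionality of the learnable feature vectors

# Queried per-sample quantities (in practice interpolated from a feature grid).
features = rng.normal(size=(N_SAMPLES, FEAT_DIM))
density = rng.uniform(0.0, 2.0, size=N_SAMPLES)
delta = np.full(N_SAMPLES, 0.02)          # spacing between consecutive samples

# Standard volume rendering weights: w_i = T_i * (1 - exp(-sigma_i * delta_i)).
alpha = 1.0 - np.exp(-density * delta)
transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
weights = transmittance * alpha

def decoder(x, w1, w2):
    """Toy two-layer network standing in for the color decoder."""
    return np.tanh(np.maximum(x @ w1, 0.0) @ w2)

w1 = rng.normal(size=(FEAT_DIM, 64)) * 0.1
w2 = rng.normal(size=(64, 3)) * 0.1

# (a) Standard framework: one network evaluation PER SAMPLE, then composite colors.
per_sample_rgb = decoder(features, w1, w2)               # N_SAMPLES evaluations
pixel_standard = (weights[:, None] * per_sample_rgb).sum(axis=0)

# (b) Volume feature rendering: composite the FEATURES first, then ONE evaluation.
rendered_feature = (weights[:, None] * features).sum(axis=0)
pixel_feature_rendered = decoder(rendered_feature[None, :], w1, w2)[0]

print("standard:", pixel_standard, "feature-rendered:", pixel_feature_rendered)
```

Because the decoder now runs once per pixel rather than once per sample, a comparatively larger decoder fits in the same training and rendering budget, which is the trade-off the abstract describes.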
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - LiveNVS: Neural View Synthesis on Live RGB-D Streams [4.717325308876748]
We present LiveNVS, a system that allows for neural novel view synthesis on a live RGB-D input stream.
LiveNVS achieves state-of-the-art neural rendering quality for unknown scenes during capture.
arXiv Detail & Related papers (2023-11-28T10:29:39Z) - AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance
Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
Rendering images with this new paradigm is slow because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes an orthogonal direction by learning how best to reduce the number of required sample points.
arXiv Detail & Related papers (2022-07-21T05:59:13Z) - Neural Adaptive SCEne Tracing [24.781844909539686]
We present NAScenT, the first neural rendering method based on directly training a hybrid explicit-implicit neural representation.
NAScenT is capable of reconstructing challenging scenes, including large, sparsely populated volumes such as UAV-captured outdoor environments.
arXiv Detail & Related papers (2022-02-28T10:27:23Z) - Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z) - Fast Training of Neural Lumigraph Representations using Meta Learning [109.92233234681319]
We develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time.
Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection.
We show that MetaNLR++ achieves similar or better photorealistic novel view synthesis results in a fraction of the time that competing methods require.
arXiv Detail & Related papers (2021-06-28T18:55:50Z) - NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z) - Light Field Networks: Neural Scene Representations with
Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray for ray marching or volumetric rendering (see the sketch after this list).
arXiv Detail & Related papers (2021-06-04T17:54:49Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from
Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z) - TRANSPR: Transparency Ray-Accumulating Neural 3D Scene Point Renderer [6.320273914694594]
We propose and evaluate a neural point-based graphics method that can model semi-transparent scene parts.
We show that novel views of semi-transparent point cloud scenes can be generated after training with our approach.
arXiv Detail & Related papers (2020-09-06T21:19:18Z)
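As a point of comparison with the per-ray feature compositing sketched above, the single-evaluation idea referenced in the Light Field Networks entry can be illustrated as follows. This is a minimal sketch under stated assumptions: the Plücker ray encoding and the toy two-layer network are illustrative stand-ins, not the LFN authors' implementation.

```python
# Sketch: light-field-style rendering with a single network evaluation per ray.
import numpy as np

rng = np.random.default_rng(1)

def plucker(origin, direction):
    """Encode a ray as 6D Plucker coordinates (unit direction, moment)."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

def light_field_net(ray_code, w1, w2):
    """Toy stand-in for a light field network: ray code -> RGB in one call."""
    return np.tanh(np.maximum(ray_code @ w1, 0.0) @ w2)

w1 = rng.normal(size=(6, 64)) * 0.1
w2 = rng.normal(size=(64, 3)) * 0.1

origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.1, -0.05, 1.0])

# One network evaluation per ray, versus dozens to hundreds for ray marching.
rgb = light_field_net(plucker(origin, direction), w1, w2)
print("rgb:", rgb)
```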