NeRF in detail: Learning to sample for view synthesis
- URL: http://arxiv.org/abs/2106.05264v1
- Date: Wed, 9 Jun 2021 17:59:10 GMT
- Title: NeRF in detail: Learning to sample for view synthesis
- Authors: Relja Arandjelović, Andrew Zisserman
- Abstract summary: Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
- Score: 104.75126790300735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRF) methods have demonstrated impressive novel view
synthesis performance. The core approach is to render individual rays by
querying a neural network at points sampled along the ray to obtain the density
and colour of the sampled points, and integrating this information using the
rendering equation. Since dense sampling is computationally prohibitive, a
common solution is to perform coarse-to-fine sampling.
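To make the quadrature concrete, here is a minimal PyTorch sketch of how NeRF-style methods integrate sampled densities and colours into a pixel colour; the function and tensor names (`render_ray`, `sigmas`, `rgbs`, `deltas`) are illustrative, not from the paper.

```python
import torch

def render_ray(sigmas, rgbs, deltas):
    """Numerical quadrature of the volume rendering equation for one ray.

    sigmas: (N,) densities at the N sampled points.
    rgbs:   (N, 3) colours at the sampled points.
    deltas: (N,) distances between consecutive samples.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j): probability the ray
    # reaches sample i without being absorbed earlier.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alphas + 1e-10])[:-1], dim=0)
    weights = trans * alphas                      # per-sample contribution
    color = (weights[:, None] * rgbs).sum(dim=0)  # expected colour of the ray
    return color, weights
```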
In this work we address a clear limitation of the vanilla coarse-to-fine
approach -- that it is based on a heuristic and not trained end-to-end for the
task at hand. We introduce a differentiable module that learns to propose
samples and their importance for the fine network, and consider and compare
multiple alternatives for its neural architecture. Training the proposal module
from scratch can be unstable due to lack of supervision, so an effective
pre-training strategy is also put forward. The approach, named 'NeRF in detail'
(NeRF-ID), achieves superior view synthesis quality over NeRF and the
state-of-the-art on the synthetic Blender benchmark and on par or better
performance on the real LLFF-NeRF scenes. Furthermore, by leveraging the
predicted sample importance, a 25% saving in computation can be achieved
without significantly sacrificing the rendering quality.
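As a rough sketch of what a learned proposer and importance-based pruning could look like (the paper compares several architectures; the MLP, names and sizes below are placeholders, not NeRF-ID's actual design):

```python
import torch
import torch.nn as nn

class ProposalModule(nn.Module):
    """Hypothetical differentiable proposer: maps the coarse network's
    per-ray weights to fine sample positions and importances."""

    def __init__(self, n_coarse=64, n_fine=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_coarse, 256), nn.ReLU(),
            nn.Linear(256, 2 * n_fine))
        self.n_fine = n_fine

    def forward(self, coarse_weights):
        out = self.net(coarse_weights)                 # (B, 2*n_fine)
        t = torch.sigmoid(out[:, :self.n_fine])        # sample positions in [0, 1]
        imp = torch.softmax(out[:, self.n_fine:], -1)  # per-sample importance
        return t, imp

def prune(t, imp, keep_frac=0.75):
    """Drop the least important samples before querying the fine network;
    the abstract reports ~25% compute savings from this kind of pruning."""
    k = int(t.shape[-1] * keep_frac)
    idx = imp.topk(k, dim=-1).indices
    return t.gather(-1, idx), imp.gather(-1, idx)
```

Because the proposer is a network rather than a heuristic, its outputs receive gradients from the rendering loss and can be trained end-to-end with the fine network.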
Related papers
- Efficient NeRF Optimization -- Not All Samples Remain Equally Hard [9.404889815088161]
We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF).
NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources.
arXiv Detail & Related papers (2024-08-06T13:49:01Z)
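A minimal sketch of the online hard sample mining idea, assuming a cached per-ray loss; the half-hard, half-random split is an invented detail, not the paper's exact scheme:

```python
import torch

def mine_hard_rays(per_ray_loss, batch_size, hard_frac=0.5):
    """Spend part of the next batch on the rays with the highest current
    loss and the rest on uniformly random rays."""
    n_hard = int(batch_size * hard_frac)
    hard = per_ray_loss.topk(n_hard).indices              # hardest rays
    rand = torch.randint(len(per_ray_loss), (batch_size - n_hard,))
    return torch.cat([hard, rand])
```
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]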
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
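The core step, casting reflected rays from points along the camera ray, can be illustrated with the standard mirror-reflection formula (a generic sketch, not NeRF-Casting's actual code):

```python
import torch

def reflect(view_dir, normal):
    """Mirror the ray direction about the surface normal; the reflected
    ray is then traced through the scene representation."""
    view_dir = view_dir / view_dir.norm(dim=-1, keepdim=True)
    normal = normal / normal.norm(dim=-1, keepdim=True)
    return view_dir - 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal
```
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]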
We propose NeRF-VPT, an innovative method for novel view synthesis.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
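A minimal sketch of the cascading idea under assumed layer sizes: each stage consumes the previous stage's rendered RGB as an extra "prompt" input (illustrative only, not NeRF-VPT's actual architecture):

```python
import torch
import torch.nn as nn

class PromptedStage(nn.Module):
    """One cascade stage: the previous stage's rendered RGB is
    concatenated to the current point features as a visual prompt."""

    def __init__(self, feat_dim=63):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),  # +3 for the RGB prompt
            nn.Linear(256, 4))                        # density + colour

    def forward(self, features, prev_rgb):
        return self.mlp(torch.cat([features, prev_rgb], dim=-1))
```
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]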
We propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
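The initial pose estimation from 2D-3D matches can be illustrated with standard PnP + RANSAC (OpenCV used here as a stand-in; this is not PNeRFLoc's actual code):

```python
import cv2
import numpy as np

def initial_pose(pts3d, pts2d, K):
    """Estimate a camera pose from matched 3D scene points and their 2D
    image projections, with RANSAC to reject outlier matches."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 matrix
    return R, tvec, inliers
```
- ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields [27.008124938806944]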
We propose ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes).
Our ProNeRF yields state-of-the-art metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding 0.95dB higher PSNR than the best published sampler-based method, HyperReel.
arXiv Detail & Related papers (2023-12-13T13:37:32Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
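A schematic of the three jointly trained components the summary names; every layer shape below is invented for illustration:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Summarizes the current noisy frame into a latent state."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
    def forward(self, frame):
        return torch.relu(self.conv(frame))

class ImportanceNet(nn.Module):
    """Predicts a per-pixel sampling distribution from the latent state."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(16, 1, 1)
    def forward(self, latent):
        logits = self.conv(latent).flatten(1)   # (B, H*W)
        return torch.softmax(logits, dim=-1)    # where to spend samples

class Denoiser(nn.Module):
    """Cleans the accumulated radiance estimate, conditioned on the latent."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3 + 16, 3, 3, padding=1)
    def forward(self, radiance, latent):
        return self.conv(torch.cat([radiance, latent], dim=1))
```
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]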
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
Rendering images with this new paradigm is slow because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes an orthogonal direction by learning how to best reduce the number of required sample points.
arXiv Detail & Related papers (2022-07-21T05:59:13Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
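The ray-entropy regularizer can be written down directly from the rendering weights (a sketch of the concept; the paper's exact loss may differ):

```python
import torch

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of a ray's normalized sample weights; penalizing
    high entropy encourages density to concentrate at the surface."""
    p = weights / (weights.sum(-1, keepdim=True) + eps)
    return -(p * torch.log(p + eps)).sum(-1)
```
- NeuSample: Neural Sample Field for Efficient View Synthesis [129.10351459066501]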
We propose a lightweight module which we name a neural sample field.
The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into radiance fields for volume rendering.
We show that NeuSample achieves better rendering quality than NeRF while enjoying a faster inference speed.
arXiv Detail & Related papers (2021-11-30T16:43:49Z)
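A minimal sketch of what such a sample field could look like: a small MLP maps a ray's origin and direction to ordered sample depths in a single forward pass; all sizes and the near/far bounds are illustrative, not NeuSample's actual design.

```python
import torch
import torch.nn as nn

class SampleField(nn.Module):
    """Maps a ray (origin, direction) straight to sample depths along it,
    replacing coarse-to-fine sampling with one forward pass."""

    def __init__(self, n_samples=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, n_samples))

    def forward(self, rays_o, rays_d, near=2.0, far=6.0):
        t = torch.sigmoid(self.mlp(torch.cat([rays_o, rays_d], dim=-1)))
        t, _ = torch.sort(near + (far - near) * t, dim=-1)  # ordered depths
        # Convert depths to 3D point coordinates for the radiance field.
        return rays_o[..., None, :] + t[..., :, None] * rays_d[..., None, :]
```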
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.