ProNeRF: Learning Efficient Projection-Aware Ray Sampling for
Fine-Grained Implicit Neural Radiance Fields
- URL: http://arxiv.org/abs/2312.08136v1
- Date: Wed, 13 Dec 2023 13:37:32 GMT
- Title: ProNeRF: Learning Efficient Projection-Aware Ray Sampling for
Fine-Grained Implicit Neural Radiance Fields
- Authors: Juan Luis Gonzalez Bello, Minh-Quan Viet Bui, and Munchurl Kim
- Abstract summary: We propose ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes).
Our ProNeRF yields state-of-the-art metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding 0.95dB higher PSNR than the best published sampler-based method, HyperReel.
- Score: 27.008124938806944
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advances in neural rendering have shown that, albeit slow, implicit
compact models can learn a scene's geometries and view-dependent appearances
from multiple views. To maintain such a small memory footprint but achieve
faster inference times, recent works have adopted `sampler' networks that
adaptively sample a small subset of points along each ray in the implicit
neural radiance fields. Although these methods achieve up to a 10$\times$
reduction in rendering time, they still suffer from considerable quality
degradation compared to the vanilla NeRF. In contrast, we propose ProNeRF,
which provides an optimal trade-off between memory footprint (similar to NeRF),
speed (faster than HyperReel), and quality (better than K-Planes). ProNeRF is
equipped with a novel projection-aware sampling (PAS) network together with a
new training strategy for ray exploration and exploitation, allowing for
efficient fine-grained particle sampling. Our ProNeRF yields state-of-the-art
metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding
0.95dB higher PSNR than the best published sampler-based method, HyperReel. Our
exploration and exploitation training strategy allows ProNeRF to learn the full
scenes' color and density distributions while also learning efficient ray
sampling focused on the highest-density regions. We provide extensive
experimental results that support the effectiveness of our method on the widely
adopted forward-facing and 360 datasets, LLFF and Blender, respectively.
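The sampler networks discussed in the abstract all target the same bottleneck: the volume rendering quadrature that composites many per-ray MLP evaluations into one pixel color. A minimal NumPy sketch of that quadrature (variable names and shapes are illustrative, not taken from any of the papers):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Numerical quadrature of the volume rendering integral for one ray.

    sigmas: (N,) densities at the N sampled points along the ray
    colors: (N, 3) RGB predicted at those points
    deltas: (N,) spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-segment opacity
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                 # per-sample contribution
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

Vanilla NeRF evaluates its MLP at one to two hundred points per ray to make this sum accurate; a learned sampler tries to place an order of magnitude fewer points where the weights peak.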
Related papers
- Efficient NeRF Optimization -- Not All Samples Remain Equally Hard [9.404889815088161]
We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF).
NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources.
arXiv Detail & Related papers (2024-08-06T13:49:01Z)
- Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering [106.0057551634008]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing smoothing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot NeRF methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z)
- Efficient Ray Sampling for Radiance Fields Reconstruction [4.004168836949491]
The ray sampling strategy profoundly impacts network convergence.
We propose a novel ray sampling approach for neural radiance fields.
Our method significantly outperforms state-of-the-art techniques on public benchmark datasets.
arXiv Detail & Related papers (2023-08-29T18:11:32Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF while preserving the rendering speed with a single network forwarding per pixel as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- NerfAcc: Efficient Sampling Accelerates NeRFs [44.372357157357875]
We show that improved sampling is generally applicable across NeRF variants under a unified concept of a transmittance estimator.
We develop NerfAcc, a Python toolbox that provides flexible APIs for incorporating advanced sampling methods into NeRF related methods.
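The transmittance-estimator view can be illustrated, independently of NerfAcc's actual API, as importance resampling against coarse-pass weights (a hedged sketch; the function name and interface are assumptions):

```python
import numpy as np

def resample_by_weights(t_coarse, weights, n_fine, seed=0):
    """Inverse-CDF importance sampling: draw fine sample depths where the
    coarse transmittance-derived weights concentrate."""
    rng = np.random.default_rng(seed)
    pdf = weights / weights.sum()
    cdf = np.cumsum(pdf)
    u = rng.random(n_fine)                       # uniform draws in [0, 1)
    idx = np.searchsorted(cdf, u, side="right")  # bin whose mass covers u
    idx = np.minimum(idx, len(t_coarse) - 1)     # guard against float round-off
    return t_coarse[idx]
```

This is the same mechanism behind NeRF's hierarchical coarse-to-fine sampling; the methods above differ mainly in how cheaply the weights are estimated.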
arXiv Detail & Related papers (2023-05-08T18:02:11Z)
- AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
Rendering images with this new paradigm is slow because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes an orthogonal direction by learning how best to reduce the number of required sample points.
arXiv Detail & Related papers (2022-07-21T05:59:13Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF offers a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- NeuSample: Neural Sample Field for Efficient View Synthesis [129.10351459066501]
We propose a lightweight module called a neural sample field.
The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into radiance fields for volume rendering.
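The idea of a module that maps a ray directly to a sample distribution can be sketched with a toy network. This is not NeuSample's actual architecture; the layer sizes, random untrained weights, and the softmax-cumsum parameterization are all illustrative assumptions:

```python
import numpy as np

# Toy "sample field": a one-hidden-layer network (random, untrained weights
# for illustration) maps a 6-vector ray (origin + direction) to 8 ordered depths.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(6, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.normal(size=(32, 8)), np.zeros(8)

def sample_field(ray, near=2.0, far=6.0):
    h = np.tanh(ray @ W1 + b1)
    raw = h @ W2 + b2
    e = np.exp(raw - raw.max())
    frac = e / e.sum()  # positive fractions summing to 1 (softmax)
    # Cumulative sum makes the depths strictly increasing within (near, far]
    return near + (far - near) * np.cumsum(frac)
```

A single forward pass thus yields ordered per-ray depths that can be fed to the radiance field, replacing dense stratified sampling.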
We show that NeuSample achieves better rendering quality than NeRF while enjoying a faster inference speed.
arXiv Detail & Related papers (2021-11-30T16:43:49Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach: it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.