NerfAcc: Efficient Sampling Accelerates NeRFs
- URL: http://arxiv.org/abs/2305.04966v2
- Date: Tue, 24 Oct 2023 21:30:16 GMT
- Title: NerfAcc: Efficient Sampling Accelerates NeRFs
- Authors: Ruilong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa
- Abstract summary: We show that improved sampling is generally applicable across NeRF variants under a unified concept of the transmittance estimator.
We develop NerfAcc, a Python toolbox that provides flexible APIs for incorporating advanced sampling methods into NeRF-related methods.
- Score: 44.372357157357875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimizing and rendering Neural Radiance Fields is computationally expensive due to the vast number of samples required by volume rendering. Recent works have introduced alternative sampling approaches to help accelerate their methods; however, these approaches are often not the focus of the work. In this paper, we investigate and compare multiple sampling approaches and demonstrate that improved sampling is generally applicable across NeRF variants under a unified concept of the transmittance estimator. To facilitate future experiments, we develop NerfAcc, a Python toolbox that provides flexible APIs for incorporating advanced sampling methods into NeRF-related methods. We demonstrate its flexibility by showing that it can reduce the training time of several recent NeRF methods by 1.5x to 20x with minimal modifications to the existing codebase. Additionally, highly customized NeRFs, such as Instant-NGP, can be implemented in native PyTorch using NerfAcc.
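To make the estimator-based sampling idea concrete, here is a minimal sketch of a NerfAcc sampling-and-rendering loop, assuming the nerfacc 0.5.x-style API (OccGridEstimator and nerfacc.rendering); the toy radiance-field functions are hypothetical placeholders, not part of the library.

```python
# A minimal sketch of estimator-based sampling with NerfAcc, assuming the
# nerfacc 0.5.x-style API (OccGridEstimator, nerfacc.rendering); the toy
# radiance-field functions below are hypothetical placeholders.
import torch
import nerfacc

device = "cuda"  # nerfacc's sampling kernels are CUDA-based
aabb = torch.tensor([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0], device=device)
estimator = nerfacc.OccGridEstimator(roi_aabb=aabb, resolution=128).to(device)

# Hypothetical stand-ins for a user radiance field (e.g. an Instant-NGP MLP).
def query_density(x):                         # x: (n, 3) -> sigma: (n,)
    return torch.relu(1.0 - x.norm(dim=-1))  # a soft sphere of density

def query_rgb_sigma(x, d):                    # -> rgb: (n, 3), sigma: (n,)
    return torch.sigmoid(x), query_density(x)

# Example rays from the origin in random directions.
rays_o = torch.zeros(1024, 3, device=device)
rays_d = torch.nn.functional.normalize(
    torch.randn(1024, 3, device=device), dim=-1)

def sigma_fn(t_starts, t_ends, ray_indices):
    t = (t_starts + t_ends) / 2.0
    x = rays_o[ray_indices] + t[:, None] * rays_d[ray_indices]
    return query_density(x)

# The estimator skips samples whose contribution to the transmittance-weighted
# integral is negligible; during training one would also periodically call
# estimator.update_every_n_steps(...) to keep the occupancy grid current.
ray_indices, t_starts, t_ends = estimator.sampling(
    rays_o, rays_d, sigma_fn=sigma_fn, render_step_size=5e-3)

def rgb_sigma_fn(t_starts, t_ends, ray_indices):
    t = (t_starts + t_ends) / 2.0
    x = rays_o[ray_indices] + t[:, None] * rays_d[ray_indices]
    return query_rgb_sigma(x, rays_d[ray_indices])

color, opacity, depth, extras = nerfacc.rendering(
    t_starts, t_ends, ray_indices, n_rays=rays_o.shape[0],
    rgb_sigma_fn=rgb_sigma_fn)
```

Concentrating samples where the estimated transmittance is still significant, and skipping the rest, is the mechanism behind the reported 1.5x to 20x training speedups.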
Related papers
- Simple and Fast Distillation of Diffusion Models [39.79747569096888]
We propose Simple and Fast Distillation (SFD) of diffusion models, which simplifies the paradigm used in existing methods.
SFD achieves 4.53 FID (NFE=2) on CIFAR-10 with only 0.64 hours of fine-tuning on a single NVIDIA A100 GPU.
arXiv Detail & Related papers (2024-09-29T12:13:06Z)
- Efficient NeRF Optimization -- Not All Samples Remain Equally Hard [9.404889815088161]
We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF).
NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources.
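As a rough illustration of online hard sample mining (a generic sketch, not the paper's code; render_fn and the keep ratio are assumed placeholders):

```python
# A generic sketch of online hard sample mining for NeRF training: score a
# candidate batch of rays by photometric error, then backpropagate only
# through the hardest fraction. render_fn is a placeholder for any
# differentiable ray-rendering function.
import torch

def hard_sample_step(render_fn, rays, target_rgb, keep_ratio=0.25):
    # First pass (no grad): score every candidate ray by its loss.
    with torch.no_grad():
        per_ray_loss = ((render_fn(rays) - target_rgb) ** 2).mean(dim=-1)
    k = max(1, int(keep_ratio * rays.shape[0]))
    hard_idx = per_ray_loss.topk(k).indices  # indices of the hardest rays
    # Second pass (with grad), restricted to the mined subset.
    loss = ((render_fn(rays[hard_idx]) - target_rgb[hard_idx]) ** 2).mean()
    return loss
```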
arXiv Detail & Related papers (2024-08-06T13:49:01Z)
- ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields [27.008124938806944]
We propose ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes).
Our ProNeRF yields state-of-the-art metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding 0.95dB higher PSNR than the best published sampler-based method, HyperReel.
arXiv Detail & Related papers (2023-12-13T13:37:32Z)
- Analyzing the Internals of Neural Radiance Fields [4.681790910494339]
We analyze large, trained ReLU-MLPs used in coarse-to-fine sampling.
We show how these large MLPs can be accelerated by transforming their intermediate activations into a weight estimate.
arXiv Detail & Related papers (2023-06-01T14:06:48Z)
- NeuSample: Neural Sample Field for Efficient View Synthesis [129.10351459066501]
We propose a lightweight module called a neural sample field.
The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into radiance fields for volume rendering.
We show that NeuSample achieves better rendering quality than NeRF while enjoying a faster inference speed.
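A minimal sketch of what such a sample field could look like (layer sizes and the cumulative-softmax depth parametrization are assumptions for illustration, not the paper's architecture):

```python
# A minimal sketch of a "neural sample field": a small MLP maps each ray to a
# monotone set of depths, which become 3D points for the radiance field.
import torch
import torch.nn as nn

class SampleField(nn.Module):
    def __init__(self, n_samples=64, near=2.0, far=6.0):
        super().__init__()
        self.near, self.far = near, far
        self.mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, n_samples))

    def forward(self, rays_o, rays_d):
        logits = self.mlp(torch.cat([rays_o, rays_d], dim=-1))
        # Softmax + cumulative sum yields sorted depth fractions in (0, 1].
        fractions = torch.cumsum(torch.softmax(logits, dim=-1), dim=-1)
        t = self.near + (self.far - self.near) * fractions          # (n, s)
        points = rays_o[:, None] + t[..., None] * rays_d[:, None]   # (n, s, 3)
        return t, points
```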
arXiv Detail & Related papers (2021-11-30T16:43:49Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis performance.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
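For context, here is a sketch of the heuristic that such a learned module replaces: vanilla NeRF's coarse-to-fine step draws fine samples by inverse-transform sampling from the piecewise-constant PDF given by the coarse network's weights.

```python
# Vanilla NeRF's coarse-to-fine heuristic: inverse-transform sampling of fine
# depths from the piecewise-constant PDF defined by the coarse weights.
import torch

def sample_pdf(bins, weights, n_fine):
    # bins: (n_rays, n_coarse + 1) depth bin edges; weights: (n_rays, n_coarse)
    pdf = weights / (weights.sum(dim=-1, keepdim=True) + 1e-8)
    cdf = torch.cumsum(pdf, dim=-1)
    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], dim=-1)
    u = torch.rand(*cdf.shape[:-1], n_fine, device=cdf.device)
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, cdf.shape[-1] - 1)
    cdf_lo = torch.gather(cdf, -1, idx - 1)
    cdf_hi = torch.gather(cdf, -1, idx)
    bin_lo = torch.gather(bins, -1, idx - 1)
    bin_hi = torch.gather(bins, -1, idx)
    frac = (u - cdf_lo) / (cdf_hi - cdf_lo + 1e-8)
    return bin_lo + frac * (bin_hi - bin_lo)  # (n_rays, n_fine) fine depths
```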
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
- Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples [67.11669996924671]
We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm.
When updating the generator parameters, we zero out the gradient contributions from the elements of the batch that the critic scores as 'least realistic'.
We show that this 'top-k update' procedure is a generally applicable improvement.
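A hedged sketch of that one-line change in a PyTorch-style generator step (model and optimizer names are placeholders; the non-saturating loss is one common choice, not necessarily the paper's):

```python
# Top-k generator update: keep only the k batch elements the critic scores as
# most realistic, so the least realistic ones contribute zero gradient.
import torch

def generator_step(generator, critic, opt_g, z, k):
    fake = generator(z)                # (batch, ...) generated samples
    scores = critic(fake).squeeze(-1)  # (batch,) realism scores from critic
    topk_scores, _ = scores.topk(k)    # discard the least realistic elements
    loss = -topk_scores.mean()         # non-saturating GAN loss on the top-k
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```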
arXiv Detail & Related papers (2020-02-14T19:27:50Z)