Compressing Explicit Voxel Grid Representations: fast NeRFs become also small
- URL: http://arxiv.org/abs/2210.12782v1
- Date: Sun, 23 Oct 2022 16:42:29 GMT
- Title: Compressing Explicit Voxel Grid Representations: fast NeRFs become also small
- Authors: Chenxi Lola Deng and Enzo Tartaglione
- Abstract summary: Re:NeRF aims to reduce memory storage of NeRF models while maintaining comparable performance.
We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks.
- Score: 3.1473798197405944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: NeRFs have revolutionized the world of per-scene radiance field
reconstruction because of their intrinsic compactness. One of the main
limitations of NeRFs is their slow rendering speed, both at training and
inference time. Recent research focuses on the optimization of an explicit
voxel grid (EVG) that represents the scene, which can be paired with neural
networks to learn radiance fields. This approach significantly enhances speed at
both training and inference time, but at the cost of a large memory footprint.
In this work we propose Re:NeRF, an approach that specifically
targets EVG-NeRFs compressibility, aiming to reduce memory storage of NeRF
models while maintaining comparable performance. We benchmark our approach with
three different EVG-NeRF architectures on four popular benchmarks, showing
Re:NeRF's broad usability and effectiveness.
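The compression target can be pictured with a minimal, hypothetical sketch (not Re:NeRF's actual algorithm, which the abstract does not detail): an explicit voxel grid is mostly empty space, so pruning near-zero-density voxels and storing only the survivors sparsely already shrinks memory substantially.

```python
import numpy as np

def prune_voxel_grid(density, features, threshold=0.01):
    """Keep only voxels whose density exceeds a threshold,
    storing survivors as (indices, values) sparse arrays."""
    mask = density > threshold          # boolean occupancy mask
    idx = np.argwhere(mask)             # (K, 3) surviving voxel coordinates
    return idx, density[mask], features[mask]

# Toy 64^3 grid with 8 feature channels per voxel; most of space is empty.
rng = np.random.default_rng(0)
density = np.where(rng.random((64, 64, 64)) < 0.05,
                   rng.random((64, 64, 64)), 0.0)
features = rng.standard_normal((64, 64, 64, 8)).astype(np.float32)

idx, d_sparse, f_sparse = prune_voxel_grid(density, features)
dense_bytes = density.nbytes + features.nbytes
sparse_bytes = idx.nbytes + d_sparse.nbytes + f_sparse.nbytes
print(f"kept {len(idx)} / {density.size} voxels, "
      f"{dense_bytes / sparse_bytes:.1f}x smaller")
```

Real EVG compression pipelines typically go further (quantization, entropy coding, importance-based pruning), but the sketch shows why sparsifying an explicit grid pays off.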
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering [106.0057551634008]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing smoothing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot NeRF methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF's while preserving rendering speed via a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table [62.164549651134465]
We propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality.
Our experiments with state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF achieves the fastest training time on the same GPU hardware with similar or even higher reconstruction quality.
arXiv Detail & Related papers (2023-04-25T05:44:50Z)
- FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization [32.1581416980828]
We present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods.
We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training.
arXiv Detail & Related papers (2023-03-13T18:59:03Z)
- MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF).
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
arXiv Detail & Related papers (2022-12-16T08:04:56Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which, for the first time, brings the power of robust data augmentations into regularizing NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference [0.0]
We propose SqueezeNeRF, which is more than 60 times more memory-efficient than the sparse cache of FastNeRF.
It is still able to render at more than 190 frames per second on a high spec GPU during inference.
arXiv Detail & Related papers (2022-04-06T05:19:47Z)
- VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field [28.087183395793236]
We propose Voxel-Accelerated NeRF (VaxNeRF) to integrate NeRF with the visual hull.
VaxNeRF achieves about 2-8x faster learning on top of the highly performant JaxNeRF.
We hope VaxNeRF can empower and accelerate new NeRF extensions and applications.
arXiv Detail & Related papers (2021-11-25T14:56:53Z)
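
MF-NeRF's mixed-feature hash table (above) builds on the hash-grid encodings popularized by Instant-NGP. As a generic illustration (not MF-NeRF's specific mixed-feature design), the sketch below hashes integer voxel coordinates into a fixed-size table of learned feature vectors, so memory is bounded by the table size rather than the cubic grid resolution; the prime multipliers follow the common spatial-hashing recipe.

```python
import numpy as np

# Spatial-hash feature lookup (generic Instant-NGP-style sketch).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint32)

def hash_lookup(table, coords, table_size):
    """Map integer voxel coords (N, 3) to rows of a fixed-size feature table."""
    h = np.zeros(len(coords), dtype=np.uint32)
    for d in range(3):
        # uint32 multiplication wraps around, which is exactly what we want.
        h ^= coords[:, d].astype(np.uint32) * PRIMES[d]
    return table[h % table_size]

table_size, feat_dim = 2 ** 14, 4
table = np.random.default_rng(1).standard_normal(
    (table_size, feat_dim)).astype(np.float32)
coords = np.array([[0, 0, 0], [128, 7, 300]], dtype=np.int64)
feats = hash_lookup(table, coords, table_size)
print(feats.shape)  # one feature vector per query coordinate
```

Collisions are tolerated and left for the downstream network to resolve, which is why such tables stay small regardless of scene resolution.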
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all summaries) and is not responsible for any consequences of its use.