CAwa-NeRF: Instant Learning of Compression-Aware NeRF Features
- URL: http://arxiv.org/abs/2310.14695v1
- Date: Mon, 23 Oct 2023 08:40:44 GMT
- Title: CAwa-NeRF: Instant Learning of Compression-Aware NeRF Features
- Authors: Omnia Mahmoud, Théo Ladune, Matthieu Gendrin
- Abstract summary: In this paper, we introduce instant learning of compression-aware NeRF features (CAwa-NeRF).
Our proposed instant learning pipeline can achieve impressive results on different kinds of static scenes.
In particular, for single-object masked-background scenes, CAwa-NeRF compresses the feature grids down to 6% (1.2 MB) of the original size without any loss in PSNR (33 dB), or down to 2.4% (0.53 MB) with a slight loss in PSNR (32.31 dB).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling 3D scenes by volumetric feature grids is one of the promising
directions of neural approximations to improve Neural Radiance Fields (NeRF).
Instant-NGP (INGP) introduced multi-resolution hash encoding from a lookup
table of trainable feature grids which enabled learning high-quality neural
graphics primitives in a matter of seconds. However, this improvement came at
the cost of higher storage size. In this paper, we address this challenge by
introducing instant learning of compression-aware NeRF features (CAwa-NeRF),
which allows exporting zip-compressed feature grids at the end of model
training with negligible extra time overhead, without changing either the
storage architecture or the parameters used in the original INGP paper.
The proposed method is not limited to INGP, however, and can be adapted to
any model. Extensive simulations show that the proposed instant learning
pipeline achieves impressive results on different kinds of static scenes,
such as single-object masked-background scenes and real-life scenes captured
in our studio. In particular, for single-object masked-background scenes,
CAwa-NeRF compresses the feature grids down to 6% (1.2 MB) of the original
size without any loss in PSNR (33 dB), or down to 2.4% (0.53 MB) with a
slight loss in PSNR (32.31 dB).
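The abstract does not spell out the training mechanism, so the following is only a minimal sketch of one plausible reading of "compression-aware features": perturb the Instant-NGP-style hash-grid features with quantization noise during training, then round and zip-compress (DEFLATE) the grids at export. All names (HashFeatureGrid, export_zip) and hyperparameters here are hypothetical, not taken from the paper.

```python
# Hypothetical compression-aware feature-grid training sketch (not the paper's exact method).
import io, zipfile
import numpy as np
import torch

class HashFeatureGrid(torch.nn.Module):
    """Toy stand-in for an Instant-NGP style multi-resolution hash table."""
    def __init__(self, num_levels=16, table_size=2**14, feat_dim=2):
        super().__init__()
        self.tables = torch.nn.Parameter(
            1e-4 * torch.randn(num_levels, table_size, feat_dim))

    def forward(self, indices):
        # indices: (levels, batch) integer hash indices, assumed precomputed.
        return torch.stack([self.tables[l, indices[l]] for l in range(indices.shape[0])])

def quantization_noise(params, step=1 / 256):
    """Uniform noise of one quantization step, making features robust to rounding."""
    return params + (torch.rand_like(params) - 0.5) * step

def export_zip(grid, path="features.zip", step=1 / 256):
    """Round to the training quantization step and store with DEFLATE (zip)."""
    q = torch.round(grid.tables.detach() / step).to(torch.int16).numpy()
    buf = io.BytesIO()
    np.save(buf, q)
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("feature_grids.npy", buf.getvalue())

grid = HashFeatureGrid()
opt = torch.optim.Adam(grid.parameters(), lr=1e-2)
for _ in range(100):                        # stand-in for the NeRF training loop
    idx = torch.randint(0, 2**14, (16, 1024))
    feats = quantization_noise(grid(idx))   # train on noise-perturbed features
    loss = feats.pow(2).mean()              # placeholder for the actual rendering loss
    opt.zero_grad(); loss.backward(); opt.step()
export_zip(grid)                            # grids are already compression-friendly at export
```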
Related papers
- SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields [9.606992888590757]
We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function.
We propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model.
Experiments show that the proposed approach achieves state-of-the-art rendering quality of continual learning NeRF on NeRF-Synthetic, LLFF, and TanksAndTemples datasets.
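The summary describes the distillation only at a high level, so here is a minimal, hypothetical sketch of the generic idea: a frozen teacher NeRF for previous scenes supervises the new (student) NeRF at sampled 3D points and view directions. The MLP sizes and the plain MSE term are assumptions, not SCARF's actual uncertainty-weighted scheme.

```python
# Hypothetical radiance-field knowledge-distillation sketch.
import torch

def nerf_mlp(hidden=64):
    # (x, y, z, dx, dy, dz) -> (density, r, g, b); a minimal stand-in MLP.
    return torch.nn.Sequential(
        torch.nn.Linear(6, hidden), torch.nn.ReLU(),
        torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
        torch.nn.Linear(hidden, 4))

teacher = nerf_mlp().eval()   # stands in for a model trained on previous scenes (kept frozen)
student = nerf_mlp()          # model being trained on the new scene
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    pts = torch.rand(4096, 6) * 2 - 1             # random points and view directions
    with torch.no_grad():
        target = teacher(pts)                     # teacher's density and color
    distill_loss = torch.nn.functional.mse_loss(student(pts), target)
    # In practice this term would be weighted (e.g. by surface uncertainty)
    # and added to the photometric loss on the new scene's images.
    opt.zero_grad(); distill_loss.backward(); opt.step()
```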
arXiv Detail & Related papers (2024-09-06T03:36:12Z)
- Neural NeRF Compression [19.853882143024]
Recent NeRFs utilize feature grids to improve rendering quality and speed.
These representations introduce significant storage overhead.
This paper presents a novel method for efficiently compressing a grid-based NeRF model.
arXiv Detail & Related papers (2024-06-13T09:12:26Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- NeRFCodec: Neural Feature Compression Meets Neural Radiance Fields for Memory-Efficient Scene Representation [22.151167286623416]
We propose an end-to-end NeRF compression framework that integrates non-linear transform, quantization, and entropy coding for memory-efficient scene representation.
We demonstrate our method outperforms existing NeRF compression methods, enabling high-quality novel view synthesis with a memory budget of 0.5 MB.
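The summary names a transform/quantization/entropy-coding pipeline; the sketch below illustrates that generic recipe on a 2D feature plane. The layer shapes, the noise-based rounding proxy, and the crude rate term are assumptions for illustration, not NeRFCodec's actual architecture.

```python
# Hypothetical transform -> quantize -> entropy-code sketch for a NeRF feature plane.
import torch

analysis  = torch.nn.Conv2d(8, 4, 3, stride=2, padding=1)            # non-linear transform (encoder)
synthesis = torch.nn.ConvTranspose2d(4, 8, 4, stride=2, padding=1)   # inverse transform (decoder)
opt = torch.optim.Adam(list(analysis.parameters()) + list(synthesis.parameters()), lr=1e-3)

features = torch.randn(1, 8, 64, 64)         # stand-in for a grid-based NeRF feature plane
for _ in range(200):
    y = torch.relu(analysis(features))
    y_hat = y + (torch.rand_like(y) - 0.5)   # additive uniform noise simulates rounding
    rate = 0.5 * torch.log2(1 + y_hat.pow(2)).mean()         # crude bitrate proxy
    distortion = (synthesis(y_hat) - features).pow(2).mean()
    loss = distortion + 0.01 * rate          # rate-distortion trade-off
    opt.zero_grad(); loss.backward(); opt.step()
# At export time the rounded latents would be entropy coded (e.g. arithmetic
# coding under a learned prior) instead of being stored raw.
```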
arXiv Detail & Related papers (2024-04-02T15:49:00Z)
- PyNeRF: Pyramidal Neural Radiance Fields [51.25406129834537]
We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions.
At render time, we simply use coarser grids to render samples that cover larger volumes.
Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
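A minimal sketch of the stated idea, choosing a coarser head for samples that cover larger volumes. The resolution list and the footprint-to-level rule below are assumed for illustration, not the paper's exact formula.

```python
# Hypothetical head selection by sample footprint (PyNeRF-style idea).
GRID_RESOLUTIONS = [64, 128, 256, 512]    # one model head per grid resolution (assumed values)

def head_for_sample(pixel_footprint_radius, distance_along_ray, scene_size=1.0):
    """Pick the head whose cell size roughly matches the sample's world-space footprint."""
    footprint = pixel_footprint_radius * distance_along_ray   # cone radius at this sample
    target_res = scene_size / max(footprint, 1e-8)            # resolution matching that footprint
    for level, res in enumerate(GRID_RESOLUTIONS):
        if res >= target_res:
            return level                  # coarsest head that is still fine enough
    return len(GRID_RESOLUTIONS) - 1      # footprint smaller than the finest cell

print(head_for_sample(1e-3, 0.1))    # close-up sample -> finest head (index 3)
print(head_for_sample(1e-3, 50.0))   # distant sample -> coarsest head (index 0)
```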
arXiv Detail & Related papers (2023-11-30T23:52:46Z)
- HollowNeRF: Pruning Hashgrid-Based NeRFs with Trainable Collision Mitigation [6.335245465042035]
We propose a novel compression solution for hashgrid-based Neural Radiance Fields (NeRF).
HollowNeRF automatically sparsifies the feature grid during the training phase.
Our method delivers comparable rendering quality to Instant-NGP, while utilizing just 31% of the parameters.
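To illustrate "sparsify the feature grid during training" in the simplest possible form, the sketch below gates each grid entry with a trainable sigmoid and adds a sparsity penalty; HollowNeRF's actual trainable collision-mitigation scheme is more involved, so treat this purely as an assumed toy version.

```python
# Hypothetical trainable-gate sparsification of a hash-grid feature table.
import torch

table = torch.nn.Parameter(1e-2 * torch.randn(2**14, 2))   # hash-grid feature table
gate_logits = torch.nn.Parameter(torch.zeros(2**14, 1))    # one trainable gate per entry
opt = torch.optim.Adam([table, gate_logits], lr=1e-2)
target = torch.randn(2**14, 2)                             # stand-in for rendering supervision

for _ in range(200):
    gates = torch.sigmoid(gate_logits)
    gated = gates * table                                  # gated entries feed the NeRF MLP
    render_loss = (gated - target).pow(2).mean()           # placeholder for the NeRF loss
    sparsity_loss = gates.mean()                           # push gates toward zero
    loss = render_loss + 1e-2 * sparsity_loss
    opt.zero_grad(); loss.backward(); opt.step()

kept = (torch.sigmoid(gate_logits) > 0.5).float().mean().item()
print(f"fraction of grid entries kept: {kept:.2%}")        # pruned entries need not be stored
```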
arXiv Detail & Related papers (2023-08-19T22:28:17Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
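A minimal sketch of occupancy-grid-aided ray sampling in the spirit described above: query a precomputed (here non-differentiable) occupancy grid along the ray and draw more samples where occupancy is high. The unit-cube scene mapping and sampling rule are assumptions, not CLONeR's exact procedure.

```python
# Hypothetical occupancy-grid-guided ray sampling.
import numpy as np

def sample_along_ray(origin, direction, occupancy, n_samples=64, t_max=10.0):
    """Importance-sample distances t along origin + t*direction using grid occupancy."""
    res = occupancy.shape[0]
    t_coarse = np.linspace(0.0, t_max, 256)
    pts = origin[None, :] + t_coarse[:, None] * direction[None, :]
    idx = np.clip(((pts + 1) / 2 * res).astype(int), 0, res - 1)   # scene assumed in [-1, 1]^3
    weights = occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] + 1e-3    # keep a small floor everywhere
    probs = weights / weights.sum()
    return np.sort(np.random.choice(t_coarse, size=n_samples, p=probs))

occupancy = np.zeros((32, 32, 32)); occupancy[10:20, 10:20, 10:20] = 1.0   # a toy occupied box
t = sample_along_ray(np.array([-1.0, -0.3, -0.3]), np.array([1.0, 0.0, 0.0]), occupancy)
print(t[:5])   # samples cluster where the ray crosses occupied cells
```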
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
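The sketch below illustrates just one of the three levels mentioned above, in the simplest assumed form: a worst-case perturbation of the input coordinates found with a single gradient-ascent step and then used as an augmented training sample. Step sizes, the MLP, and the supervision are placeholders.

```python
# Hypothetical worst-case (adversarial) coordinate perturbation for NeRF training.
import torch

model = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def worst_case_coords(coords, target, eps=1e-2):
    """One gradient-ascent step on the loss w.r.t. an additive coordinate perturbation."""
    delta = torch.zeros_like(coords, requires_grad=True)
    loss = (model(coords + delta) - target).pow(2).mean()
    loss.backward()
    return (coords + eps * delta.grad.sign()).detach()     # ascent direction, bounded by eps

for _ in range(100):
    coords = torch.rand(1024, 3) * 2 - 1
    target = torch.zeros(1024, 4)                          # placeholder supervision
    aug = worst_case_coords(coords, target)
    loss = (model(aug) - target).pow(2).mean()             # train on the perturbed inputs
    opt.zero_grad(); loss.backward(); opt.step()
```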
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
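As a rough illustration of partitioning training pixels into submodules, the sketch below assigns each pixel's ray to the spatial cell centroid it passes closest to, so each submodule only sees rays relevant to its region. The 2x2 centroid layout and nearest-centroid rule are assumptions, not Mega-NeRF's actual partitioning.

```python
# Hypothetical pixel/ray-to-submodule partitioning sketch.
import numpy as np

centroids = np.array([[-0.5, -0.5], [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5]])  # ground-plane cells

def assign_ray(origin, direction, t_max=10.0, steps=64):
    """Return the index of the centroid the ray passes closest to (seen from above)."""
    t = np.linspace(0.0, t_max, steps)
    xy = origin[None, :2] + t[:, None] * direction[None, :2]        # ray samples, top view
    dists = np.linalg.norm(xy[:, None, :] - centroids[None, :, :], axis=-1)
    return int(dists.min(axis=0).argmin())

rays = [(np.array([0.0, 0.0, 2.0]), np.array([0.6, -0.4, -1.0])),
        (np.array([0.0, 0.0, 2.0]), np.array([-0.5, 0.5, -1.0]))]
buckets = [assign_ray(o, d) for o, d in rays]
print(buckets)   # each pixel/ray lands in exactly one submodule's training set
```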
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- DeepCompress: Efficient Point Cloud Geometry Compression [1.808877001896346]
We propose a more efficient deep learning-based encoder architecture for point cloud compression.
We show that incorporating the learned activation function from Computationally Efficient Neural Image Compression (CENIC) yields dramatic gains in efficiency and performance.
Our proposed modifications outperform the baseline approaches by a small margin in terms of Bjøntegaard delta rate and PSNR values.
arXiv Detail & Related papers (2021-06-02T23:18:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.