SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference
- URL: http://arxiv.org/abs/2204.02585v2
- Date: Thu, 7 Apr 2022 02:02:46 GMT
- Title: SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference
- Authors: Krishna Wadhwani, Tamaki Kojima
- Abstract summary: We propose SqueezeNeRF, which is more than 60 times more memory-efficient than the sparse cache of FastNeRF.
It is still able to render at more than 190 frames per second on a high-spec GPU during inference.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) has emerged as the state-of-the-art method for
novel view generation of complex scenes, but it is very slow during inference.
Recently, there have been multiple works on speeding up NeRF inference, but the
state-of-the-art methods for real-time NeRF inference rely on caching the
neural network output, which occupies several gigabytes of disk space and
limits their real-world applicability. As caching the output of the original
NeRF network is not feasible, Garbin et al. proposed "FastNeRF", which
factorizes the problem into two sub-networks: one that depends only on the 3D
coordinates of a sample point and one that depends only on the 2D camera
viewing direction. Although this factorization enables them to reduce the cache
size and perform inference at over 200 frames per second, the memory overhead
is still substantial. In this work, we propose SqueezeNeRF, which is more than
60 times more memory-efficient than the sparse cache of FastNeRF and is still
able to render at more than 190 frames per second on a high-spec GPU during
inference.
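To make the factorization described above concrete, here is a minimal sketch of a FastNeRF-style split in plain PyTorch. It is an illustration under assumptions, not the authors' implementation: the position-only branch emits a density and D position-dependent RGB feature vectors, the view-only branch emits D scalar weights, and the color is their weighted sum. Names such as `FactorizedRadianceField`, `pos_net`, `dir_net`, and `feature_dim` are hypothetical.

```python
import torch
import torch.nn as nn

class FactorizedRadianceField(nn.Module):
    """Minimal sketch of a FastNeRF-style factorization (illustrative only).

    The position branch depends only on the 3D sample coordinate and the
    view branch depends only on the viewing direction, so each branch can
    be evaluated on a grid and cached independently instead of caching the
    full position-plus-direction function.
    """

    def __init__(self, feature_dim: int = 8, hidden: int = 256):
        super().__init__()
        self.feature_dim = feature_dim
        # Position branch: (x, y, z) -> (sigma, D feature vectors of size 3).
        self.pos_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3 * feature_dim),
        )
        # View branch: unit viewing direction (equivalently two angles) -> D weights.
        self.dir_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, feature_dim),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        # Density and per-position RGB features from the position-only branch.
        out = self.pos_net(xyz)
        sigma = torch.relu(out[..., :1])
        feats = out[..., 1:].reshape(*xyz.shape[:-1], self.feature_dim, 3)
        # Per-direction weights from the view-only branch.
        beta = self.dir_net(view_dir)  # shape (..., feature_dim)
        # Color is the weight-feature inner product over the D components.
        rgb = torch.sigmoid((beta.unsqueeze(-1) * feats).sum(dim=-2))
        return rgb, sigma
```

Because each branch ignores the other input, both can be precomputed on grids and looked up at render time; per the title and abstract, SqueezeNeRF's contribution is to factorize further so that this cache shrinks by more than 60x while still rendering above 190 frames per second.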
Related papers
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF's while preserving the rendering speed, with a single network forward pass per pixel as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF).
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
arXiv Detail & Related papers (2022-12-16T08:04:56Z)
- Real-Time Neural Light Field on Mobile Devices [54.44982318758239]
We introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size.
Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes.
arXiv Detail & Related papers (2022-12-15T18:58:56Z)
- Compressing Explicit Voxel Grid Representations: fast NeRFs become also small [3.1473798197405944]
Re:NeRF aims to reduce memory storage of NeRF models while maintaining comparable performance.
We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks.
arXiv Detail & Related papers (2022-10-23T16:42:29Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields [45.84983186882732]
"mip-NeRF" (a la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale.
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts.
Compared to NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset.
arXiv Detail & Related papers (2021-03-24T18:02:11Z)
- FastNeRF: High-Fidelity Neural Rendering at 200FPS [17.722927021159393]
We propose FastNeRF, a system capable of rendering high fidelity images at 200Hz on a high-end consumer GPU.
The proposed method is 3000 times faster than the original NeRF algorithm and at least an order of magnitude faster than existing work on accelerating NeRF.
arXiv Detail & Related papers (2021-03-18T17:09:12Z)