FastNeRF: High-Fidelity Neural Rendering at 200FPS
- URL: http://arxiv.org/abs/2103.10380v1
- Date: Thu, 18 Mar 2021 17:09:12 GMT
- Title: FastNeRF: High-Fidelity Neural Rendering at 200FPS
- Authors: Stephan J. Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton,
Julien Valentin
- Abstract summary: We propose FastNeRF, a system capable of rendering high fidelity images at 200Hz on a high-end consumer GPU.
The proposed method is 3000 times faster than the original NeRF algorithm and at least an order of magnitude faster than existing work on accelerating NeRF.
- Score: 17.722927021159393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work on Neural Radiance Fields (NeRF) showed how neural networks can
be used to encode complex 3D environments that can be rendered
photorealistically from novel viewpoints. Rendering these images is very
computationally demanding and recent improvements are still a long way from
enabling interactive rates, even on high-end hardware. Motivated by scenarios
on mobile and mixed reality devices, we propose FastNeRF, the first NeRF-based
system capable of rendering high fidelity photorealistic images at 200Hz on a
high-end consumer GPU. The core of our method is a graphics-inspired
factorization that allows for (i) compactly caching a deep radiance map at each
position in space, and (ii) efficiently querying that map using ray directions to
estimate the pixel values in the rendered image. Extensive experiments show
that the proposed method is 3000 times faster than the original NeRF algorithm
and at least an order of magnitude faster than existing work on accelerating
NeRF, while maintaining visual quality and extensibility.
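The factorization described in the abstract splits NeRF's single position-and-direction MLP into a position-only network and a direction-only network whose outputs combine via an inner product, so each part can be cached independently. Below is a minimal PyTorch sketch of that split, not the authors' implementation; the component count D, layer widths, ReLU activations, and the omission of positional encoding are all illustrative assumptions.

```python
# Minimal sketch of the position/direction factorization (not the authors'
# code). D, layer widths, and activations are assumed; positional encoding
# of the inputs is omitted for brevity.
import torch
import torch.nn as nn

D = 8  # number of factorization components (assumed value)

class PositionNet(nn.Module):
    """Depends only on 3D position: outputs density and D radiance components."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3 * D),  # sigma + D RGB triplets (u_i, v_i, w_i)
        )

    def forward(self, xyz):                 # xyz: (N, 3)
        out = self.mlp(xyz)
        sigma = torch.relu(out[:, :1])      # non-negative density
        uvw = out[:, 1:].reshape(-1, D, 3)  # (N, D, 3) radiance components
        return sigma, uvw

class DirectionNet(nn.Module):
    """Depends only on view direction: outputs D scalar weights beta_i."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, D),
        )

    def forward(self, dirs):                # dirs: (N, 3) unit view vectors
        return self.mlp(dirs)               # (N, D) weights

def radiance(pos_net, dir_net, xyz, dirs):
    """Color is the weighted sum c = sum_i beta_i * (u_i, v_i, w_i)."""
    sigma, uvw = pos_net(xyz)
    beta = dir_net(dirs)
    rgb = torch.einsum("nd,ndc->nc", beta, uvw)  # (N, 3)
    return sigma, rgb
```

Because the first network depends only on a 3D coordinate and the second only on a 2D direction, each can be evaluated once over a dense grid and cached, which is what makes rendering at 200Hz feasible.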
Related papers
- MixRT: Mixed Neural Representations For Real-Time NeRF Rendering [24.040636076067393]
We propose MixRT, a novel NeRF representation that includes a low-quality mesh, a view-dependent displacement map, and a compressed NeRF model.
This design effectively harnesses the capabilities of existing graphics hardware, thus enabling real-time NeRF rendering on edge devices.
arXiv Detail & Related papers (2023-12-19T04:14:11Z)
- PyNeRF: Pyramidal Neural Radiance Fields [51.25406129834537]
We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions.
At render time, we simply use coarser grids to render samples that cover larger volumes.
Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
arXiv Detail & Related papers (2023-11-30T23:52:46Z)
- Hyb-NeRF: A Multiresolution Hybrid Encoding for Neural Radiance Fields [12.335934855851486]
We present Hyb-NeRF, a novel neural radiance field with a multi-resolution hybrid encoding.
We show that Hyb-NeRF achieves faster rendering with better rendering quality and an even lower memory footprint than previous methods.
arXiv Detail & Related papers (2023-11-21T10:01:08Z)
- Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D Scene Representations [34.836151514152746]
In this work, we investigate combining an autoencoder with a NeRF, in which latent features are rendered and then convolutionally decoded.
The resulting latent-space NeRF can produce novel views with higher quality than standard colour-space NeRFs.
We can control the tradeoff between efficiency and image quality by shrinking the AE architecture, achieving over 13 times faster rendering with only a small drop in performance.
arXiv Detail & Related papers (2023-10-27T03:52:08Z)
- Real-Time Neural Light Field on Mobile Devices [54.44982318758239]
We introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size.
Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes.
arXiv Detail & Related papers (2022-12-15T18:58:56Z)
- EfficientNeRF: Efficient Neural Radiance Fields [63.76830521051605]
We present EfficientNeRF, an efficient NeRF-based method to represent 3D scenes and synthesize novel-view images.
Our method reduces training time by over 88% and reaches rendering speeds of over 200 FPS, while still achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-02T05:36:44Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- Baking Neural Radiance Fields for Real-Time View Synthesis [41.07052395570522]
We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG).
The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact, and can be rendered in real time (a sketch of this bake-and-look-up pattern follows this list).
arXiv Detail & Related papers (2021-03-26T17:59:52Z)
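Several of the papers above, notably FastNeRF's caching and SNeRG's baking, share a bake-then-look-up pattern: evaluate the position-dependent network once over a dense grid offline, then replace per-sample MLP evaluations at render time with trilinear interpolation into the cached volume. The sketch below illustrates that pattern under assumed shapes carried over from the factorization sketch above; the resolution, scene bound, and pos_net interface are illustrative, and SNeRG itself stores a sparse, compressed grid rather than the dense one used here.

```python
# Minimal sketch of baking a position-dependent network onto a dense grid and
# querying it by trilinear interpolation. resolution, bound, and the pos_net
# interface (returning sigma and uvw as in the sketch above) are assumptions.
import torch
import torch.nn.functional as F

def bake_grid(pos_net, resolution=128, bound=1.0):
    """Evaluate pos_net over a dense grid covering [-bound, bound]^3."""
    axis = torch.linspace(-bound, bound, resolution)
    zz, yy, xx = torch.meshgrid(axis, axis, axis, indexing="ij")
    xyz = torch.stack([xx, yy, zz], dim=-1).reshape(-1, 3)
    with torch.no_grad():
        sigma, uvw = pos_net(xyz)  # chunk this batch in practice
    feats = torch.cat([sigma, uvw.reshape(xyz.shape[0], -1)], dim=-1)
    channels = feats.shape[-1]
    # reshape to the (1, C, D, H, W) layout expected by grid_sample
    return feats.T.reshape(1, channels, resolution, resolution, resolution)

def query_grid(grid, xyz, bound=1.0):
    """Trilinear lookup that replaces the per-sample MLP call at render time."""
    coords = (xyz / bound).view(1, -1, 1, 1, 3)  # normalized (x, y, z) in [-1, 1]
    out = F.grid_sample(grid, coords, mode="bilinear", align_corners=True)
    return out.view(grid.shape[1], -1).T         # (N, C) cached features
```

At render time, query_grid(bake_grid(pos_net), sample_points) would be combined with the cached direction weights exactly as in radiance() above, trading memory for the per-sample network evaluations.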