FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple
Super-Resolution Pipeline
- URL: http://arxiv.org/abs/2312.11537v2
- Date: Wed, 20 Dec 2023 23:17:49 GMT
- Title: FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple
Super-Resolution Pipeline
- Authors: Chien-Yu Lin, Qichen Fu, Thomas Merth, Karren Yang, Anurag Ranjan
- Abstract summary: Super-resolution (SR) techniques have been proposed to upscale the outputs of neural radiance fields (NeRF).
In this paper, we aim to leverage SR for efficiency gains without costly training or architectural changes.
- Score: 10.252591107152503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Super-resolution (SR) techniques have recently been proposed to upscale the
outputs of neural radiance fields (NeRF) and generate high-quality images with
enhanced inference speeds. However, existing NeRF+SR methods increase training
overhead by using extra input features, loss functions, and/or expensive
training procedures such as knowledge distillation. In this paper, we aim to
leverage SR for efficiency gains without costly training or architectural
changes. Specifically, we build a simple NeRF+SR pipeline that directly
combines existing modules, and we propose a lightweight augmentation technique,
random patch sampling, for training. Compared to existing NeRF+SR methods, our
pipeline mitigates the SR computing overhead and can be trained up to 23x
faster, making it feasible to run on consumer devices such as the Apple
MacBook. Experiments show our pipeline can upscale NeRF outputs by 2-4x while
maintaining high quality, increasing inference speeds by up to 18x on an NVIDIA
V100 GPU and 12.8x on an M1 Pro chip. We conclude that SR can be a simple but
effective technique for improving the efficiency of NeRF models for consumer
devices.
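
As a rough illustration of the training recipe described in the abstract (an off-the-shelf SR network trained on randomly sampled patches cut from low-resolution NeRF renders and the matching crops of the ground-truth images), here is a minimal PyTorch sketch. The names (sample_random_patches, sr_net, train_step) are hypothetical placeholders rather than the paper's actual code, and the SR network is assumed to upscale its input by the given factor.

```python
import torch
import torch.nn.functional as F


def sample_random_patches(lr_img, hr_img, scale, patch, n_patches):
    """Crop aligned random patches from an LR render / HR ground-truth pair.

    lr_img: (C, h, w) low-resolution NeRF render
    hr_img: (C, h * scale, w * scale) ground-truth image
    Returns tensors of shape (n_patches, C, patch, patch) and
    (n_patches, C, patch * scale, patch * scale).
    """
    _, h, w = lr_img.shape
    lr_patches, hr_patches = [], []
    for _ in range(n_patches):
        # Pick a random top-left corner in the LR image; the HR crop is the
        # corresponding region scaled up by `scale`.
        y = torch.randint(0, h - patch + 1, (1,)).item()
        x = torch.randint(0, w - patch + 1, (1,)).item()
        lr_patches.append(lr_img[:, y:y + patch, x:x + patch])
        hr_patches.append(hr_img[:, y * scale:(y + patch) * scale,
                                 x * scale:(x + patch) * scale])
    return torch.stack(lr_patches), torch.stack(hr_patches)


def train_step(sr_net, optimizer, lr_render, hr_gt, scale=2):
    """One SR training step on randomly sampled patch pairs."""
    lr_p, hr_p = sample_random_patches(lr_render, hr_gt, scale,
                                       patch=32, n_patches=16)
    pred = sr_net(lr_p)           # sr_net is assumed to upscale by `scale`
    loss = F.l1_loss(pred, hr_p)  # plain reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a pipeline of this kind, lr_render would be produced by rendering the NeRF at reduced resolution, and the trained SR network would then upscale full NeRF renders by 2-4x at inference time.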
Related papers
- NeRF-XL: Scaling NeRFs with Multiple GPUs [72.75214892939411]
We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs.
We show improvements in reconstruction quality with larger parameter counts and speed improvements with more GPUs.
We demonstrate the effectiveness of NeRF-XL on a wide variety of datasets, including the largest open-source dataset to date, MatrixCity, containing 258K images covering a 25 km² city area.
arXiv Detail & Related papers (2024-04-24T21:43:15Z)
- Prompt2NeRF-PIL: Fast NeRF Generation via Pretrained Implicit Latent [61.56387277538849]
This paper explores promptable NeRF generation for direct conditioning and fast generation of NeRF parameters for the underlying 3D scenes.
Prompt2NeRF-PIL is capable of generating a variety of 3D objects with a single forward pass.
We show that our approach accelerates the text-to-NeRF model DreamFusion and the 3D reconstruction of the image-to-NeRF method Zero-1-to-3 by 3 to 5 times.
arXiv Detail & Related papers (2023-12-05T08:32:46Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real time.
We use a small network similar to NeRF's while preserving rendering speed with a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm [57.73868344064043]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
arXiv Detail & Related papers (2023-06-10T09:19:19Z)
- Re-ReND: Real-time Rendering of NeRFs across Devices [56.081995086924216]
Re-ReND is designed to achieve real-time performance by converting the NeRF into a representation that can be efficiently processed by standard graphics pipelines.
We find that Re-ReND can achieve over a 2.6-fold increase in rendering speed versus the state-of-the-art without perceptible losses in quality.
arXiv Detail & Related papers (2023-03-15T15:59:41Z)
- FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization [32.1581416980828]
We present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods.
We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training.
arXiv Detail & Related papers (2023-03-13T18:59:03Z)
- Compressing Explicit Voxel Grid Representations: fast NeRFs become also small [3.1473798197405944]
Re:NeRF aims to reduce the memory footprint of NeRF models while maintaining comparable performance.
We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks.
arXiv Detail & Related papers (2022-10-23T16:42:29Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field [28.087183395793236]
We propose Voxel-Accelerated NeRF (VaxNeRF), which integrates NeRF with the visual hull.
VaxNeRF achieves about 2-8x faster learning on top of the high-performance JaxNeRF.
We hope VaxNeRF can empower and accelerate new NeRF extensions and applications.
arXiv Detail & Related papers (2021-11-25T14:56:53Z)
- Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields [45.84983186882732]
"mip-NeRF" (a la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale.
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts.
Compared to NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset.
arXiv Detail & Related papers (2021-03-24T18:02:11Z)