FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency
Regularization
- URL: http://arxiv.org/abs/2303.07418v1
- Date: Mon, 13 Mar 2023 18:59:03 GMT
- Title: FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency
Regularization
- Authors: Jiawei Yang, Marco Pavone, Yue Wang
- Abstract summary: We present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods.
We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training.
- Score: 32.1581416980828
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Novel view synthesis with sparse inputs is a challenging problem for neural
radiance fields (NeRF). Recent efforts alleviate this challenge by introducing
external supervision, such as pre-trained models and extra depth signals, and
by non-trivial patch-based rendering. In this paper, we present Frequency
regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms
previous methods with minimal modifications to the plain NeRF. We analyze the
key challenges in few-shot neural rendering and find that frequency plays an
important role in NeRF's training. Based on the analysis, we propose two
regularization terms. One is to regularize the frequency range of NeRF's
inputs, while the other is to penalize the near-camera density fields. Both
techniques are "free lunches" at no additional computational cost. We
demonstrate that even with one line of code change, the original NeRF can
achieve similar performance as other complicated methods in the few-shot
setting. FreeNeRF achieves state-of-the-art performance across diverse
datasets, including Blender, DTU, and LLFF. We hope this simple baseline will
motivate a rethinking of the fundamental role of frequency in NeRF's training
under the low-data regime and beyond.
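As a concrete illustration of the two regularizers described in the abstract, here is a minimal NumPy sketch. It assumes a standard sin/cos positional encoding with num_freqs frequency bands and per-ray densities ordered near-to-far; the function names, the linear schedule, and the default k are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def freq_mask(step, total_steps, num_freqs):
    """Frequency regularization: linearly reveal higher positional-encoding
    frequency bands as training progresses (hypothetical simplified schedule)."""
    alpha = num_freqs * step / total_steps        # number of "visible" bands
    bands = np.arange(num_freqs)
    # Bands below alpha are fully kept, the band at the boundary is linearly
    # ramped in, and the remaining high-frequency bands are zeroed out.
    return np.clip(alpha - bands, 0.0, 1.0)

def masked_positional_encoding(x, step, total_steps, num_freqs):
    """Standard sin/cos positional encoding with the frequency mask applied.
    x: (..., 3) points or view directions."""
    mask = freq_mask(step, total_steps, num_freqs)        # (num_freqs,)
    freqs = 2.0 ** np.arange(num_freqs)                   # (num_freqs,)
    scaled = x[..., None, :] * freqs[:, None]             # (..., num_freqs, 3)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return (enc * mask[:, None]).reshape(*x.shape[:-1], -1)

def occlusion_loss(sigma, k=10):
    """Occlusion regularization: penalize density predicted at the k samples
    nearest the camera on each ray, discouraging near-camera "floaters".
    sigma: (num_rays, num_samples) densities, ordered near-to-far."""
    return sigma[:, :k].mean()
```

Masking the encoding rather than changing the network is what makes the abstract's "one line of code change" claim plausible: the schedule simply multiplies the existing positional-encoding features.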
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of the positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules [23.340064406356174]
Generalizing Neural Radiance Fields (NeRF) to new scenes is a significant challenge.
We introduce InsertNeRF, a method for INStilling gEneRalizabiliTy into NeRF.
arXiv Detail & Related papers (2023-08-26T14:50:24Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF's while preserving NeLF's rendering speed: a single network forward pass per pixel.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z) - DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric scenes without human intervention.
Our proposed method beats the SOTA point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z)
- Learning a Diffusion Prior for NeRFs [84.99454404653339]
We propose to use a diffusion model to generate NeRFs encoded on a regularized grid.
We show that our model can sample realistic NeRFs, while at the same time allowing conditional generation given an observation as guidance.
arXiv Detail & Related papers (2023-04-27T19:24:21Z)
- Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields with very few input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecise colors and warping artifacts, which degrade the performance of NeRF.
arXiv Detail & Related papers (2023-03-10T08:22:36Z)
- Compressing Explicit Voxel Grid Representations: fast NeRFs become also small [3.1473798197405944]
Re:NeRF aims to reduce memory storage of NeRF models while maintaining comparable performance.
We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks.
arXiv Detail & Related papers (2022-10-23T16:42:29Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentation to regularizing NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis: rendering a pixel amounts to a single network query (see the sketch after this list).
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
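To make the rendering-cost contrast drawn in the R2L entry concrete, the hedged sketch below (not code from any of the listed papers) compares the two representations: a NeRF-style renderer queries a field network at many samples along each ray and alpha-composites the results, while a NeLF-style light-field network maps the ray directly to a color in a single query. The callables field and light_field are assumed stand-ins for trained networks.

```python
import numpy as np

def render_pixel_nerf(field, origin, direction, near=0.0, far=1.0, n=128):
    """NeRF-style pixel: n field queries along the ray, alpha-composited."""
    t = np.linspace(near, far, n)
    points = origin + t[:, None] * direction       # (n, 3) sample locations
    sigma, rgb = field(points)                     # sigma: (n,), rgb: (n, 3)
    delta = (far - near) / n                       # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)           # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                        # compositing weights
    return (weights[:, None] * rgb).sum(axis=0)    # (3,) pixel color

def render_pixel_nelf(light_field, origin, direction):
    """NeLF-style pixel: a single query mapping the ray itself to a color."""
    return light_field(np.concatenate([origin, direction]))
```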