SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with
Simpler Solutions
- URL: http://arxiv.org/abs/2309.03955v2
- Date: Thu, 14 Sep 2023 02:32:48 GMT
- Title: SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with
Simpler Solutions
- Authors: Nagabhushan Somraj, Adithyan Karanayil, Rajiv Soundararajan
- Abstract summary: Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
- Score: 6.9980855647933655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) show impressive performance for the
photorealistic free-view rendering of scenes. However, NeRFs require dense
sampling of images in the given scene, and their performance degrades
significantly when only a sparse set of views is available. Researchers have
found that supervising the depth estimated by the NeRF helps train it
effectively with fewer views. The depth supervision is obtained either using
classical approaches or neural networks pre-trained on a large dataset. While
the former may provide only sparse supervision, the latter may suffer from
generalization issues. As opposed to the earlier approaches, we seek to learn
the depth supervision by designing augmented models and training them along
with the NeRF. We design augmented models that encourage simpler solutions by
exploring the role of positional encoding and view-dependent radiance in
training the few-shot NeRF. The depth estimated by these simpler models is used
to supervise the NeRF depth estimates. Since the augmented models can be
inaccurate in certain regions, we design a mechanism to choose only reliable
depth estimates for supervision. Finally, we add a consistency loss between the
coarse and fine multi-layer perceptrons of the NeRF to ensure better
utilization of hierarchical sampling. We achieve state-of-the-art
view-synthesis performance on two popular datasets by employing the above
regularizations. The source code for our model can be found on our project
page: https://nagabhushansn95.github.io/publications/2023/SimpleNeRF.html
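As a rough illustration of the two regularizations described above, here is a minimal PyTorch-style sketch; this is not the authors' implementation, and the reliability test, tensor shapes, and `margin` parameter are assumptions:

```python
import torch
import torch.nn.functional as F

def reliable_depth_loss(nerf_depth, aug_depth, nerf_rgb, aug_rgb, gt_rgb, margin=0.0):
    """Supervise NeRF depth with the depth of an augmented (simpler) model,
    but only on rays where the simpler model explains the pixel at least as well."""
    nerf_err = ((nerf_rgb - gt_rgb) ** 2).mean(dim=-1)   # per-ray photometric error
    aug_err = ((aug_rgb - gt_rgb) ** 2).mean(dim=-1)
    reliable = (aug_err <= nerf_err + margin).float()    # assumed reliability test
    err = (nerf_depth - aug_depth.detach()) ** 2
    return (reliable * err).sum() / reliable.sum().clamp(min=1.0)

def coarse_fine_consistency_loss(coarse_depth, fine_depth):
    """Encourage the coarse and fine MLPs to agree on scene depth,
    for better utilization of hierarchical sampling."""
    return F.mse_loss(coarse_depth, fine_depth.detach())
```

The `detach()` calls reflect one plausible choice of which model acts as the teacher in each loss.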
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
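A hedged sketch of the reflection-tracing idea follows; the `field` interface and the unit step size are assumptions, not NeRF-Casting's actual API:

```python
import torch

def reflect(view_dirs, normals):
    """Reflect ray directions about surface normals: r = d - 2 (d . n) n."""
    return view_dirs - 2.0 * (view_dirs * normals).sum(-1, keepdim=True) * normals

def trace_reflection_features(field, points, view_dirs, normals, t_vals):
    """March reflected rays through the field and volume-render feature vectors,
    rather than querying a directional MLP at every primary sample."""
    refl = reflect(view_dirs, normals)
    samples = points[..., None, :] + t_vals[..., :, None] * refl[..., None, :]
    sigma, feat = field(samples)                    # assumed: density + feature per sample
    alpha = 1.0 - torch.exp(-sigma)                 # simplified opacity, unit step size
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha[..., :-1] + 1e-10], -1), -1)
    weights = alpha * trans
    return (weights[..., None] * feat).sum(dim=-2)  # rendered feature vector per ray
```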
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Simple-RF: Regularizing Sparse Input Radiance Fields with Simpler Solutions [5.699788926464751]
Neural Radiance Fields (NeRF) show impressive performance in photo-realistic free-view rendering of scenes.
Recent improvements on the NeRF such as TensoRF and ZipNeRF employ explicit models for faster optimization and rendering.
We show that supervising the depth estimated by a radiance field helps train it effectively with fewer views.
arXiv Detail & Related papers (2024-04-29T18:00:25Z)
- ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields [9.67057831710618]
Training neural radiance fields (NeRFs) on sparse input views leads to overfitting and incorrect scene depth estimation.
We reformulate the NeRF to also directly output the visibility of a 3D point from a given viewpoint, which reduces the training time incurred by the visibility constraint.
Our model outperforms the competing sparse input NeRF models including those that use learned priors.
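For intuition, the visibility being predicted corresponds to the transmittance from the camera to each sample point; a small sketch of that target quantity (ViP-NeRF's contribution is predicting it directly, avoiding this per-ray marching):

```python
import torch

def visibility_from_density(sigma, deltas):
    """Visibility (transmittance) of each sample point from the camera:
    T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    tau = sigma * deltas                                            # per-interval optical depth
    tau_before = torch.cat([torch.zeros_like(tau[..., :1]), tau[..., :-1]], dim=-1)
    return torch.exp(-torch.cumsum(tau_before, dim=-1))
```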
arXiv Detail & Related papers (2023-04-28T18:26:23Z)
- Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields with a very small number of input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrades the performance of NeRF.
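A schematic of the self-training loop described above; the `render` and `train_nerf` interfaces are hypothetical:

```python
def self_nerf_training(train_views, unseen_poses, rounds, train_nerf):
    """Iteratively expand the training set with pseudo-views predicted
    by the model from the preceding round."""
    model = train_nerf(train_views)
    for _ in range(rounds):
        pseudo_views = [(pose, model.render(pose)) for pose in unseen_poses]
        model = train_nerf(train_views + pseudo_views)  # retrain on real + pseudo views
    return model
```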
arXiv Detail & Related papers (2023-03-10T08:22:36Z)
- ActiveNeRF: Learning where to See with Uncertainty Estimation [36.209200774203005]
Recently, Neural Radiance Fields (NeRF) has shown promising performance in reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images.
We present a novel learning framework, ActiveNeRF, aiming to model a 3D scene with a constrained input budget.
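One way such a framework can spend a constrained input budget is to acquire the candidate view the model is least certain about; a hypothetical sketch, where the `render_with_uncertainty` interface is an assumption:

```python
def select_next_view(model, candidate_poses):
    """Greedily pick the candidate pose with the highest predicted uncertainty."""
    def score(pose):
        rgb_mean, rgb_var = model.render_with_uncertainty(pose)  # per-pixel mean/variance
        return rgb_var.mean().item()
    return max(candidate_poses, key=score)
```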
arXiv Detail & Related papers (2022-09-18T12:09:15Z)
- SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
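One common way to obtain geometry pseudo labels from a single view is to reproject the reference depth into unseen viewpoints; a sketch under that assumption, not necessarily SinNeRF's exact procedure:

```python
import torch

def warp_depth_to_pseudo_view(depth, K, T_src_to_dst):
    """Unproject the reference depth map, transform the 3D points into an
    unseen view, and return their projected pixels and new depths."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()
    pts = (torch.inverse(K) @ pix.T).T * depth.reshape(-1, 1)   # 3D points, source frame
    pts_h = torch.cat([pts, torch.ones_like(pts[:, :1])], dim=-1)
    pts_dst = (T_src_to_dst @ pts_h.T).T[:, :3]                 # points in target frame
    uv = (K @ pts_dst.T).T
    return uv[:, :2] / uv[:, 2:3], pts_dst[:, 2]                # pixel coords, new depths
```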
arXiv Detail & Related papers (2022-04-02T19:32:42Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
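A minimal sketch of the representation gap: a NeLF maps a ray directly to a color in a single query, and can be distilled from a teacher NeRF on densely sampled rays (the layer sizes here are assumptions):

```python
import torch
import torch.nn as nn

class NeLF(nn.Module):
    """Light-field MLP: one forward pass per ray, instead of
    hundreds of radiance-field queries per pixel."""
    def __init__(self, in_dim=6, hidden=256, num_layers=8):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(num_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        self.mlp = nn.Sequential(*layers, nn.Linear(d, 3))

    def forward(self, rays):        # rays: (N, 6) concatenated origin and direction
        return self.mlp(rays)

# Distillation against a pretrained teacher (hypothetical interface):
#   loss = ((nelf(rays) - teacher_nerf.render_rays(rays)) ** 2).mean()
```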
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its performance drops significantly with sparse inputs.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
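A typical form of the geometry regularization is a depth-smoothness penalty on small patches rendered from unobserved viewpoints; a minimal sketch, where the patch size and the squared-difference form are assumptions:

```python
import torch

def depth_smoothness_loss(patch_depth):
    """Penalize depth variation inside rendered patches.
    patch_depth: (N, S, S) depths of N patches of size S x S."""
    dx = patch_depth[:, :, 1:] - patch_depth[:, :, :-1]
    dy = patch_depth[:, 1:, :] - patch_depth[:, :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()
```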
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- Depth-supervised NeRF: Fewer Views and Faster Training for Free [69.34556647743285]
DS-NeRF (Depth-supervised Neural Radiance Fields) is a loss for learning radiance fields that takes advantage of readily-available depth supervision.
We show that our loss is compatible with other recently proposed NeRF methods, demonstrating that depth is a cheap and easily digestible supervisory signal.
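DS-NeRF's actual loss encourages the ray termination distribution to match sparse SfM keypoint depths; as a simplified stand-in, a masked L2 depth loss looks like this (a sketch, not the paper's KL-based formulation):

```python
import torch

def sparse_depth_loss(rendered_depth, keypoint_depth, keypoint_mask):
    """Supervise rendered depth only at pixels where a reconstructed
    3D keypoint provides a depth value."""
    err = (rendered_depth - keypoint_depth) ** 2
    return (err * keypoint_mask).sum() / keypoint_mask.sum().clamp(min=1.0)
```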
arXiv Detail & Related papers (2021-07-06T17:58:35Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
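Conceptually, inversion is gradient descent on the camera pose against photometric error; a minimal sketch, where the differentiable `render` function and the pose parameterization are assumptions:

```python
import torch

def invert_nerf(render, target_image, pose_init, steps=200, lr=1e-2):
    """Estimate a camera pose by minimizing the photometric error of a
    trained, differentiable NeRF renderer with respect to the pose."""
    pose = pose_init.detach().clone().requires_grad_(True)  # e.g. se(3) coordinates
    optimizer = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = ((render(pose) - target_image) ** 2).mean()
        loss.backward()
        optimizer.step()
    return pose.detach()
```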
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.