SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single
Image
- URL: http://arxiv.org/abs/2204.00928v1
- Date: Sat, 2 Apr 2022 19:32:42 GMT
- Title: SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single
Image
- Authors: Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Humphrey Shi,
Zhangyang Wang
- Abstract summary: We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
- Score: 85.43496313628943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the rapid development of Neural Radiance Field (NeRF), the
need for dense view coverage largely prohibits its wider applications. While
several recent works have attempted to address this issue, they either still
require a few sparse views or only handle simple objects/scenes. In this work,
we consider a more ambitious task: training a neural radiance field on
realistically complex visual scenes by "looking only once", i.e., using only a
single view. To attain this goal, we present a Single View NeRF (SinNeRF)
framework consisting of thoughtfully designed semantic and geometry
regularizations. Specifically, SinNeRF constructs a semi-supervised learning
process, where we introduce and propagate geometry pseudo labels and semantic
pseudo labels to guide the progressive training process. Extensive experiments
are conducted on complex scene benchmarks, including NeRF synthetic dataset,
Local Light Field Fusion dataset, and DTU dataset. We show that even without
pre-training on multi-view datasets, SinNeRF can yield photo-realistic
novel-view synthesis results. Under the single image setting, SinNeRF
significantly outperforms the current state-of-the-art NeRF baselines in all
cases. Project page: https://vita-group.github.io/SinNeRF/
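To make the geometry pseudo-label idea concrete, below is a minimal numpy sketch of the underlying image-warping step: a depth map available at the single reference view is reprojected into an unseen view through the camera intrinsics and relative pose, producing sparse depth pseudo labels there. The function and its interface are illustrative, not taken from the SinNeRF codebase.

```python
import numpy as np

def warp_depth(depth_ref, K, R, t):
    """Reproject a reference-view depth map into a target view.

    Illustrative sketch of geometry pseudo-label propagation:
    back-project each reference pixel to 3D, move it into the target
    camera frame with the relative pose (R, t), and project it back
    to 2D. Returns a sparse target-view depth map (zeros where no
    point lands).
    """
    H, W = depth_ref.shape
    v, u = np.mgrid[0:H, 0:W]                                    # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1)
    pts_ref = np.linalg.inv(K) @ pix * depth_ref.reshape(1, -1)  # 3D points, ref frame
    pts_tgt = R @ pts_ref + t.reshape(3, 1)                      # 3D points, target frame
    proj = K @ pts_tgt
    z = proj[2]
    valid = z > 1e-6                                             # in front of the camera
    u2 = np.round(proj[0, valid] / z[valid]).astype(int)
    v2 = np.round(proj[1, valid] / z[valid]).astype(int)
    z = z[valid]
    inb = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    depth_tgt = np.zeros((H, W))
    order = np.argsort(-z[inb])      # write far to near, so near surfaces win (z-buffer)
    depth_tgt[v2[inb][order], u2[inb][order]] = z[inb][order]
    return depth_tgt
```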
Related papers
- TrackNeRF: Bundle Adjusting NeRF from Sparse and Noisy Views via Feature Tracks [57.73997367306271] (arXiv, 2024-08-20)
  TrackNeRF sets a new benchmark in noisy and sparse view reconstruction.
  TrackNeRF shows significant improvements over the state-of-the-art BARF and SPARF.
- ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining [28.03297623406931] (arXiv, 2023-12-14)
  Current breakthroughs like Neural Radiance Fields (NeRF) have demonstrated high-fidelity image synthesis but struggle with sparse input views.
  We propose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into a factorized NeRF representation.
  Unlike traditional methods, ZeroRF parametrizes feature grids with a neural network generator, enabling efficient sparse-view 360° reconstruction.
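The "neural network generator" idea above follows the Deep Image Prior: rather than optimizing grid values freely, the grid is emitted by a small network fed fixed noise, so the network architecture itself regularizes sparse-view fitting. A minimal torch sketch, with illustrative sizes and depth rather than ZeroRF's actual factorized architecture:

```python
import torch
import torch.nn as nn

class GridGenerator(nn.Module):
    """Deep-Image-Prior-style parameterization of a 2D feature plane.

    Instead of optimizing a (C, H, W) grid of features directly, the
    grid is the output of a small conv net applied to fixed random
    noise; only the network weights are trained, which implicitly
    regularizes the grid. Sizes here are illustrative, not ZeroRF's.
    """
    def __init__(self, channels=16, size=64, hidden=32):
        super().__init__()
        self.register_buffer("noise", torch.randn(1, hidden, size, size))
        self.net = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self):
        return self.net(self.noise)  # (1, C, H, W) feature plane

plane = GridGenerator()()  # features come from weights, not a free grid
```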
- SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions [6.9980855647933655] (arXiv, 2023-09-07)
  Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
  We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
  We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
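One way to obtain such a "simpler" augmented model is to shrink the positional encoding, which caps how quickly the radiance field can vary. A short numpy sketch of the standard NeRF encoding with a tunable frequency count; the specific reduction strategy shown is illustrative, not the paper's exact recipe:

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """Standard NeRF positional encoding gamma(x).

    Maps each coordinate to [sin(2^k pi x), cos(2^k pi x)] for
    k = 0..num_freqs-1. A small num_freqs limits how fast the field
    can vary, biasing training toward smoother ("simpler") solutions.
    """
    freqs = 2.0 ** np.arange(num_freqs) * np.pi  # (F,)
    angles = x[..., None] * freqs                # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)        # (..., D*2F)

pts = np.random.rand(4, 3)
full = positional_encoding(pts, num_freqs=10)   # high-frequency-capable model
simple = positional_encoding(pts, num_freqs=4)  # smoother augmented model
```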
- Multi-Space Neural Radiance Fields [74.46513422075438] (arXiv, 2023-05-07)
  Existing Neural Radiance Field (NeRF) methods suffer in the presence of reflective objects.
  We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
  Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
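A hedged sketch of the multi-space idea: K parallel fields each predict a color per ray, and a learned per-ray weighting composites them, letting reflections live in their own sub-space. The softmax blend below is one simple realization; MS-NeRF's actual compositing may differ.

```python
import torch

def blend_subspaces(colors, weights_logits):
    """Compose per-ray colors from K parallel sub-space fields.

    colors: (N, K, 3) radiance predicted by K parallel feature fields
    weights_logits: (N, K) per-ray mixing logits
    A softmax blend of parallel sub-space renderings is one simple way
    to realize the multi-space idea (illustrative, not the paper's exact scheme).
    """
    w = torch.softmax(weights_logits, dim=-1)     # (N, K)
    return (w.unsqueeze(-1) * colors).sum(dim=1)  # (N, 3)
```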
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535] (arXiv, 2022-03-31)
  Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
  NeLF presents a more straightforward representation than NeRF for novel view synthesis.
  We show the key to successfully learning a deep NeLF network is to have sufficient data.
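The "hundreds of queries per pixel" cost comes from volume rendering: each pixel's color is an alpha-composite over many samples along its ray, and every sample is a network evaluation. A minimal numpy sketch; the sample count and scene bounds are illustrative:

```python
import numpy as np

def render_pixel(query_fn, ray_o, ray_d, near=2.0, far=6.0, n_samples=192):
    """Alpha-composite one pixel the NeRF way.

    query_fn(x) -> (rgb, sigma) is the radiance field network; it is
    evaluated n_samples times for this single pixel, which is the
    per-pixel cost a light field (one query per ray) removes.
    """
    t = np.linspace(near, far, n_samples)                 # depths along the ray
    pts = ray_o + t[:, None] * ray_d                      # (n_samples, 3)
    rgb, sigma = query_fn(pts)                            # n_samples network queries
    delta = np.append(np.diff(t), 1e10)                   # last segment unbounded (NeRF convention)
    alpha = 1.0 - np.exp(-sigma * delta)                  # opacity per segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)           # (3,) pixel color

# toy field: uniform gray, constant density
dummy = lambda p: (np.full((len(p), 3), 0.5), np.ones(len(p)))
color = render_pixel(dummy, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```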
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367] (arXiv, 2021-12-01)
  We show that NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its quality drops significantly when only few views are given.
  We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
  Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
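Geometry regularization of rendered patches can be as simple as a depth smoothness prior: penalize differences between neighboring expected depths in a patch rendered from an unobserved viewpoint. A torch sketch in the spirit of RegNeRF's geometry term; the exact weighting and patch sampling are omitted:

```python
import torch

def depth_smoothness_loss(depth_patch):
    """Depth smoothness prior on rendered patches.

    depth_patch: (B, P, P) expected ray-termination depths for patches
    rendered from unobserved viewpoints. Penalizing squared differences
    of neighboring depths encourages piecewise-smooth geometry.
    """
    dx = depth_patch[:, :, 1:] - depth_patch[:, :, :-1]
    dy = depth_patch[:, 1:, :] - depth_patch[:, :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()
```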
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013] (arXiv, 2020-12-10)
  We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
  NeRFs have been shown to be remarkably effective for the task of view synthesis.
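"Inverting" a NeRF means treating the camera pose as the only free variable and descending a photometric loss through the differentiable renderer. A minimal torch sketch using a raw 6-vector pose for simplicity, whereas iNeRF itself parametrizes updates on SE(3) and samples interest-region rays:

```python
import torch

def estimate_pose(render_fn, image, pose_init, steps=100, lr=1e-2):
    """Pose estimation by gradient descent on a photometric loss.

    render_fn(pose) -> (H, W, 3) must render differentiably from a
    6-DoF pose; pose is a raw 6-vector (e.g. axis-angle + translation)
    here for simplicity.
    """
    pose = pose_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((render_fn(pose) - image) ** 2).mean()  # photometric error
        loss.backward()
        opt.step()
    return pose.detach()
```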
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315] (arXiv, 2020-12-03)
  pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
  We conduct experiments on ShapeNet benchmarks for single-image novel view synthesis tasks with held-out objects.
  In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
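The conditioning works through pixel-aligned features: each 3D query point is projected into the input view, a CNN feature is bilinearly sampled at that pixel, and the feature is fed to the NeRF MLP alongside the point encoding. A simplified torch sketch; coordinate normalization details are simplified relative to the actual pixelNeRF code:

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(feat_map, K, R, t, pts):
    """Sample image features at the projections of 3D query points.

    feat_map: (1, C, H, W) CNN features of the single input view
    K, R, t:  intrinsics and world-to-camera pose of that view
    pts:      (N, 3) query points in world space
    """
    cam = pts @ R.T + t                             # (N, 3) camera-frame points
    uv = cam @ K.T                                  # (N, 3) homogeneous pixels
    uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)      # perspective divide
    H, W = feat_map.shape[-2:]
    # map pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], -1) * 2 - 1
    feats = F.grid_sample(feat_map, grid.view(1, 1, -1, 2),
                          align_corners=True)       # (1, C, 1, N)
    return feats[0, :, 0].T                         # (N, C) per-point features
```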
This list is automatically generated from the titles and abstracts of the papers on this site.