NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds
- URL: http://arxiv.org/abs/2304.06287v2
- Date: Tue, 23 May 2023 12:49:17 GMT
- Title: NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds
- Authors: Chen Yang, Peihao Li, Zanwei Zhou, Shanxin Yuan, Bingbing Liu, Xiaokang Yang, Weichao Qiu, Wei Shen
- Abstract summary: We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF achieves impressive performance when rendering novel views similar to the input views, but suffers on novel views that differ significantly from the training views.
- Score: 60.1382112938132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present NeRFVS, a novel neural radiance fields (NeRF) based method to
enable free navigation in a room. NeRF achieves impressive performance when
rendering novel views similar to the input views, but suffers on novel views
that differ significantly from the training views. To address this issue, we
utilize holistic priors, including pseudo depth maps
and view coverage information, from neural reconstruction to guide the learning
of implicit neural representations of 3D indoor scenes. Concretely, an
off-the-shelf neural reconstruction method is leveraged to generate a geometry
scaffold. Then, two loss functions based on the holistic priors are proposed to
improve the learning of NeRF: 1) a robust depth loss that tolerates errors in
the pseudo depth map to guide the geometry learning of NeRF; 2) a
variance loss to regularize the variance of implicit neural representations to
reduce the geometry and color ambiguity in the learning procedure. These two
loss functions are modulated during NeRF optimization according to the view
coverage information to reduce the negative influence brought by the view
coverage imbalance. Extensive experiments demonstrate that our NeRFVS outperforms
state-of-the-art view synthesis methods both quantitatively and qualitatively on
indoor scenes, achieving high-fidelity free navigation results.
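
The abstract does not give the exact form of these losses, so the PyTorch sketch below is one plausible reading rather than the authors' implementation: the Huber-style robustification, the weight-based variance penalty, the per-ray coverage modulation, and all tensor names (depth_pred, pseudo_depth, coverage, and so on) are illustrative assumptions.

```python
import torch.nn.functional as F

def robust_depth_loss(depth_pred, pseudo_depth, delta=0.2):
    # Huber (smooth-L1) residual per ray: quadratic for small errors, linear
    # for large ones, so outliers in the scaffold's pseudo depth map do not
    # dominate the gradient. (Assumed robustification; the paper may differ.)
    return F.smooth_l1_loss(depth_pred, pseudo_depth,
                            beta=delta, reduction="none")

def variance_loss(samples, weights):
    # Variance of per-ray sample values (e.g. densities or colors) under the
    # volume-rendering weights; penalizing it reduces geometry and color
    # ambiguity along each ray. samples, weights: (num_rays, num_samples).
    mean = (weights * samples).sum(-1, keepdim=True)
    return (weights * (samples - mean) ** 2).sum(-1)   # (num_rays,)

def nerfvs_loss(rgb_pred, rgb_gt, depth_pred, pseudo_depth,
                samples, weights, coverage, lambda_d=0.1, lambda_v=0.01):
    # coverage: per-ray view-coverage score in [0, 1]. Rays observed by few
    # training views lean harder on the holistic priors; this linear
    # modulation is one plausible scheme, not the paper's stated one.
    prior_w = 1.0 - coverage                            # (num_rays,)
    l_rgb = F.mse_loss(rgb_pred, rgb_gt)
    l_depth = (prior_w * robust_depth_loss(depth_pred, pseudo_depth)).mean()
    l_var = (prior_w * variance_loss(samples, weights)).mean()
    return l_rgb + lambda_d * l_depth + lambda_v * l_var
```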
Related papers
- Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Recent neural rendering and reconstruction techniques, such as NeRFs or Gaussian Splatting, have shown remarkable novel view synthesis capabilities.
With fewer images available, these methods start to fail since they can no longer correctly triangulate the underlying 3D geometry.
We propose Re-Nerfing, a simple and general add-on approach that leverages novel view synthesis itself to tackle this problem.
arXiv Detail & Related papers (2023-12-04T18:56:08Z)
- Improving Neural Radiance Fields with Depth-aware Optimization for Novel View Synthesis [12.3338393483795]
We propose SfMNeRF, a method to better synthesize novel views as well as reconstruct the 3D-scene geometry.
SfMNeRF employs the epipolar, photometric consistency, depth smoothness, and position-of-matches constraints to explicitly reconstruct the 3D-scene structure.
Experiments on two public datasets demonstrate that SfMNeRF surpasses state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-11T13:37:17Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates [16.344734292989504]
SCADE is a novel technique that improves NeRF reconstruction quality on sparse, unconstrained input views.
We propose a new method that learns to predict, for each view, a continuous, multimodal distribution of depth estimates.
Experiments show that our approach enables higher fidelity novel view synthesis from sparse views.
arXiv Detail & Related papers (2023-03-23T18:00:07Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks (a minimal sketch of this ray-entropy regularizer appears after the list).
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
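
As referenced in the InfoNeRF entry above, the sketch below shows a ray-entropy regularizer of the kind that summary describes. The function name, the opacity mask, and the threshold min_opacity are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def ray_entropy_loss(weights, min_opacity=0.1, eps=1e-10):
    # weights: (num_rays, num_samples) volume-rendering weights along each ray.
    # Normalizing them yields a discrete distribution over sample positions;
    # minimizing its Shannon entropy concentrates density near a single
    # surface, suppressing the floaters that appear with too few views.
    p = weights / (weights.sum(-1, keepdim=True) + eps)
    entropy = -(p * torch.log(p + eps)).sum(-1)         # (num_rays,)
    # Skip rays that hit nothing; their entropy carries no signal. (Assumed
    # masking scheme, analogous to the paper's low-opacity filtering.)
    mask = (weights.sum(-1) > min_opacity).float()
    return (entropy * mask).sum() / (mask.sum() + eps)
```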