Single-view Neural Radiance Fields with Depth Teacher
- URL: http://arxiv.org/abs/2303.09952v2
- Date: Thu, 11 May 2023 12:35:25 GMT
- Title: Single-view Neural Radiance Fields with Depth Teacher
- Authors: Yurui Chen, Chun Gu, Feihu Zhang, Li Zhang
- Abstract summary: We develop a new NeRF model for novel view synthesis using only a single image as input.
We propose to combine (coarse) planar rendering with (fine) volume rendering to achieve higher rendering quality and better generalization.
- Score: 10.207824869802314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) have been proposed for photorealistic novel
view rendering. However, NeRF requires many different views of a scene for
training. Moreover, it generalizes poorly to new scenes and must be retrained
or fine-tuned on each one. In this paper, we develop a new NeRF model for novel
view synthesis that uses only a single image as input. We propose to combine
(coarse) planar rendering with (fine) volume rendering to achieve higher
rendering quality and better generalization. We also design a depth teacher
net that predicts dense pseudo depth maps to supervise the joint rendering
mechanism and boost the learning of consistent 3D geometry. We evaluate our
method on three challenging datasets. It outperforms state-of-the-art
single-view NeRFs, achieving 5 to 20% improvements in PSNR and reducing depth
rendering errors by 20 to 50%, and it generalizes well to unseen data without
the need to fine-tune on each new scene.
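A minimal sketch of how this joint objective could look, assuming a standard NeRF-style volume renderer in PyTorch. The function names, per-ray sampling layout, and loss weights below are illustrative assumptions, not the authors' released implementation; only the volume rendering equations follow the standard NeRF formulation.

```python
import torch

def volume_render(sigma, rgb, z_vals):
    """Standard NeRF volume rendering along each ray.

    sigma:  (R, S)    predicted densities at S samples per ray
    rgb:    (R, S, 3) predicted colors at the same samples
    z_vals: (R, S)    sample depths along the ray
    Returns the rendered color (R, 3) and expected ray depth (R,).
    """
    deltas = z_vals[:, 1:] - z_vals[:, :-1]                          # (R, S-1)
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)                         # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]                                                        # accumulated transmittance
    weights = alpha * trans                                          # (R, S)
    color = (weights[..., None] * rgb).sum(dim=1)                    # (R, 3)
    depth = (weights * z_vals).sum(dim=1)                            # expected termination depth
    return color, depth

def joint_loss(planar_rgb, vol_sigma, vol_rgb, z_vals,
               gt_rgb, teacher_depth, w_planar=0.5, w_depth=0.1):
    """Hypothetical joint objective: photometric losses for the coarse
    planar branch and the fine volume branch, plus a pseudo-depth term
    that distills the depth teacher into the rendered ray depth.
    (Loss weights are placeholders, not values from the paper.)"""
    vol_color, vol_depth = volume_render(vol_sigma, vol_rgb, z_vals)
    loss_vol = torch.mean((vol_color - gt_rgb) ** 2)
    loss_planar = torch.mean((planar_rgb - gt_rgb) ** 2)
    loss_depth = torch.mean(torch.abs(vol_depth - teacher_depth))
    return loss_vol + w_planar * loss_planar + w_depth * loss_depth
```

In this reading, the teacher's dense pseudo depth constrains where density mass concentrates along each ray, which is how depth supervision can encourage consistent geometry from a single input view.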
Related papers
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- 3D Reconstruction with Generalizable Neural Fields using Scene Priors [71.37871576124789]
We introduce a framework for training generalizable Neural Fields incorporating scene Priors (NFPs).
The NFP network maps any single-view RGB-D image into signed distance and radiance values.
A complete scene can be reconstructed by merging individual frames in the volumetric space without a fusion module.
arXiv Detail & Related papers (2023-09-26T18:01:02Z)
- SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions [6.9980855647933655]
Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance (a sketch of this encoding appears after this list).
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
arXiv Detail & Related papers (2023-09-07T18:02:57Z)
- SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates [16.344734292989504]
SCADE is a novel technique that improves NeRF reconstruction quality on sparse, unconstrained input views.
We propose a new method that learns to predict, for each view, a continuous, multimodal distribution of depth estimates.
Experiments show that our approach enables higher fidelity novel view synthesis from sparse views.
arXiv Detail & Related papers (2023-03-23T18:00:07Z)
- X-NeRF: Explicit Neural Radiance Field for Multi-Scene 360$^{\circ}$ Insufficient RGB-D Views [49.55319833743988]
This paper focuses on a rarely discussed but important setting: can we train one model that can represent multiple scenes?
By insufficient views we mean a few extremely sparse and almost non-overlapping views.
We propose X-NeRF, a fully explicit approach that learns a general scene completion process instead of a coordinate-based mapping.
arXiv Detail & Related papers (2022-10-11T04:29:26Z)
- SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
arXiv Detail & Related papers (2022-04-02T19:32:42Z)
- Neural Rays for Occlusion-aware Image-based Rendering [108.34004858785896]
We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis (NVS) task with multi-view images as input.
NeuRay can quickly generate high-quality novel view renderings of unseen scenes with little fine-tuning.
arXiv Detail & Related papers (2021-07-28T15:09:40Z)
- Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes [48.0304999503795]
We introduce Stereo Radiance Fields (SRF), a neural view synthesis approach that is trained end-to-end.
SRF generalizes to new scenes, and requires only sparse views at test time.
Experiments show that SRF learns structure instead of overfitting on a scene.
arXiv Detail & Related papers (2021-04-14T15:38:57Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
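The pixelNeRF entry above describes conditioning a radiance field on pixel-aligned image features. As a rough illustration of that idea (a sketch under assumed names and shapes, not the pixelNeRF codebase), one projects each 3D query point into the input view, bilinearly samples CNN features there, and feeds them to the NeRF MLP alongside the point coordinates:

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(feat_map, points, K, cam_pose):
    """Sample image features at the projections of 3D query points.

    feat_map: (1, C, H, W) CNN feature map of the single input view
              (assumed here to be at input-image resolution)
    points:   (N, 3) query points in world coordinates
    K:        (3, 3) camera intrinsics
    cam_pose: (4, 4) world-to-camera extrinsic matrix
    Returns per-point features of shape (N, C).
    """
    # Transform points into the camera frame and project with intrinsics.
    # (Points behind the camera are not masked in this simplified sketch.)
    pts_h = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)  # (N, 4)
    pts_cam = (cam_pose @ pts_h.T).T[:, :3]                              # (N, 3)
    uv = (K @ pts_cam.T).T                                               # (N, 3)
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)                          # pixel coords

    # Normalize to [-1, 1] for grid_sample, then sample bilinearly.
    H, W = feat_map.shape[-2:]
    grid = torch.stack([2.0 * uv[:, 0] / (W - 1) - 1.0,
                        2.0 * uv[:, 1] / (H - 1) - 1.0], dim=-1)         # (N, 2)
    grid = grid.view(1, 1, -1, 2)                                        # (1, 1, N, 2)
    sampled = F.grid_sample(feat_map, grid, align_corners=True)          # (1, C, 1, N)
    return sampled[0, :, 0].T                                            # (N, C)
```

The per-point feature is then concatenated with the (positionally encoded) point coordinates and view direction as extra MLP input, which is what lets a single trained model generalize across scenes.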
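Farther up the list, the SimpleNeRF entry turns on the role of positional encoding: high-frequency encodings let a NeRF fit fine detail but also let it overfit sparse views, so its augmented models reduce them. Below is a minimal sketch of the standard NeRF positional encoding being manipulated; the function name and default frequency count are illustrative, not taken from the SimpleNeRF code.

```python
import torch

def positional_encoding(x, num_freqs=10):
    """Standard NeRF positional encoding.

    Maps each coordinate c to (sin(2^k * pi * c), cos(2^k * pi * c)) for
    k = 0 .. num_freqs-1, so a (N, 3) input becomes (N, 3 * 2 * num_freqs).
    Lowering num_freqs yields the smoother, 'simpler' solutions that
    SimpleNeRF's augmented models exploit as regularizers.
    """
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi                # (F,)
    angles = x[..., None] * freqs                                    # (N, 3, F)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (N, 3, 2F)
    return enc.flatten(start_dim=-2)                                 # (N, 3 * 2F)
```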