Depth-supervised NeRF: Fewer Views and Faster Training for Free
- URL: http://arxiv.org/abs/2107.02791v1
- Date: Tue, 6 Jul 2021 17:58:35 GMT
- Title: Depth-supervised NeRF: Fewer Views and Faster Training for Free
- Authors: Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan
- Abstract summary: DS-NeRF is a loss for learning neural radiance fields that takes advantage of readily-available depth supervision.
We find that DS-NeRF can render more accurate images given fewer training views while training 2-6x faster.
- Score: 66.16386801362643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One common failure mode of Neural Radiance Field (NeRF) models is fitting
incorrect geometries when given an insufficient number of input views. We
propose DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning
neural radiance fields that takes advantage of readily-available depth
supervision. Our key insight is that sparse depth supervision can be used to
regularize the learned geometry, a crucial component for effectively rendering
novel views using NeRF. We exploit the fact that current NeRF pipelines require
images with known camera poses that are typically estimated by running
structure-from-motion (SFM). Crucially, SFM also produces sparse 3D points that
can be used as "free" depth supervision during training: we simply add a loss
to ensure that depth rendered along rays that intersect these 3D points is
close to the observed depth. We find that DS-NeRF can render more accurate
images given fewer training views while training 2-6x faster. With only two
training views on real-world images, DS-NeRF significantly outperforms NeRF as
well as other sparse-view variants. We show that our loss is compatible with
these NeRF models, demonstrating that depth is a cheap and easily digestible
supervisory signal. Finally, we show that DS-NeRF supports other types of depth
supervision such as scanned depth sensors and RGBD reconstruction outputs.
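Since the paper's central idea is a single added loss term, a short sketch may help make it concrete. The hypothetical PyTorch snippet below computes the expected ray-termination depth from the usual NeRF compositing weights and penalizes its squared deviation from the SfM keypoint depth; the names (`weights`, `t_vals`, `sfm_depth`, `lambda_depth`) are illustrative, and the paper's exact penalty may differ from this simple squared error.

```python
import torch

def rendered_depth(weights: torch.Tensor, t_vals: torch.Tensor) -> torch.Tensor:
    """Expected ray-termination depth: sum_i w_i * t_i.

    weights: (num_rays, num_samples) NeRF compositing weights,
             w_i = T_i * (1 - exp(-sigma_i * delta_i))
    t_vals:  (num_rays, num_samples) sample depths along each ray
    """
    return (weights * t_vals).sum(dim=-1)

def depth_supervision_loss(weights, t_vals, sfm_depth, lambda_depth=0.1):
    """Penalize deviation of rendered depth from sparse SfM depth.

    Applied only to rays that pass through a reconstructed 3D point;
    all other rays are trained with the photometric loss alone.
    """
    d_hat = rendered_depth(weights, t_vals)
    return lambda_depth * ((d_hat - sfm_depth) ** 2).mean()

# Toy usage: 4 rays with SfM keypoints, 64 samples per ray.
w = torch.rand(4, 64)
w = w / w.sum(dim=-1, keepdim=True)          # normalized weights for the toy
t = torch.linspace(2.0, 6.0, 64).expand(4, 64)
d_sfm = torch.tensor([3.0, 4.2, 2.5, 5.1])   # keypoint depths along the rays
loss = depth_supervision_loss(w, t, d_sfm)
```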
Related papers
- TD-NeRF: Novel Truncated Depth Prior for Joint Camera Pose and Neural Radiance Field Optimization [19.73020713365866]
The reliance on accurate camera poses is a significant barrier to the widespread deployment of Neural Radiance Fields (NeRF) models for 3D reconstruction and SLAM tasks.
Existing methods introduce monocular depth priors to jointly optimize the camera poses and NeRF, but they fail to fully exploit these priors and neglect their inherent noise.
We propose Truncated Depth NeRF (TD-NeRF), a novel approach that enables training NeRF from unknown camera poses by jointly optimizing the learnable parameters of the radiance field and the camera poses; see the sketch after this entry.
arXiv Detail & Related papers (2024-05-11T14:57:42Z)
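As a rough illustration of the joint optimization above, the hypothetical PyTorch sketch below makes per-image camera poses learnable (axis-angle rotation plus translation) and puts them in the same optimizer as the radiance-field parameters. This shows only the generic joint-optimization pattern, not TD-NeRF's truncated depth prior; all names and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

def axis_angle_to_matrix(v: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: (B, 3) axis-angle vectors -> (B, 3, 3) rotations."""
    theta = v.norm(dim=-1, keepdim=True).clamp(min=1e-8)   # rotation angles
    k = v / theta                                          # unit axes
    K = torch.zeros(v.shape[0], 3, 3, device=v.device)     # skew matrices
    K[:, 0, 1], K[:, 0, 2] = -k[:, 2], k[:, 1]
    K[:, 1, 0], K[:, 1, 2] = k[:, 2], -k[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -k[:, 1], k[:, 0]
    eye = torch.eye(3, device=v.device).expand_as(K)
    s, c = torch.sin(theta)[..., None], torch.cos(theta)[..., None]
    return eye + s * K + (1 - c) * (K @ K)

class LearnablePoses(nn.Module):
    """Per-image camera pose as a learnable 6-DoF parameter."""
    def __init__(self, num_images: int):
        super().__init__()
        self.rot = nn.Parameter(torch.zeros(num_images, 3))    # axis-angle
        self.trans = nn.Parameter(torch.zeros(num_images, 3))  # translation

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        """Return (B, 3, 4) camera-to-world matrices for image indices."""
        R = axis_angle_to_matrix(self.rot[idx])
        t = self.trans[idx].unsqueeze(-1)
        return torch.cat([R, t], dim=-1)

# Toy usage: one optimizer updates both the field and the poses.
field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))
poses = LearnablePoses(num_images=20)
opt = torch.optim.Adam([
    {"params": field.parameters(), "lr": 5e-4},
    {"params": poses.parameters(), "lr": 1e-3},  # separate LR for poses
])
c2w = poses(torch.tensor([0, 3, 7]))             # (3, 3, 4)
```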
- SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions [6.9980855647933655]
Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
arXiv Detail & Related papers (2023-09-07T18:02:57Z)
- ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields [9.67057831710618]
Training neural radiance fields (NeRFs) on sparse input views leads to overfitting and incorrect scene depth estimation.
We reformulate the NeRF to also directly output the visibility of a 3D point from a given viewpoint, which reduces the training time required by the visibility constraint; see the sketch after this entry.
Our model outperforms the competing sparse input NeRF models including those that use learned priors.
arXiv Detail & Related papers (2023-04-28T18:26:23Z)
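The reformulation described above can be pictured as an extra output head on the NeRF MLP. The hypothetical PyTorch sketch below adds a visibility branch next to the usual density and color heads; the layer sizes and encoding dimensions are assumptions, not ViP-NeRF's published architecture.

```python
import torch
import torch.nn as nn

class NeRFWithVisibility(nn.Module):
    """Toy NeRF-style MLP with an extra head that predicts visibility.

    Predicting visibility directly avoids the second ray march that
    computing it from accumulated transmittance would require.
    """
    def __init__(self, pos_dim=63, dir_dim=27, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)          # volume density
        self.rgb_head = nn.Sequential(                  # view-dependent color
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        self.vis_head = nn.Sequential(                  # visibility in [0, 1]
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 1), nn.Sigmoid(),
        )

    def forward(self, x_enc, d_enc):
        h = self.trunk(x_enc)
        feat = torch.cat([h, d_enc], dim=-1)
        return self.sigma_head(h), self.rgb_head(feat), self.vis_head(feat)

# Toy usage with positionally encoded points and view directions.
model = NeRFWithVisibility()
sigma, rgb, vis = model(torch.rand(1024, 63), torch.rand(1024, 27))
```

One plausible training scheme, consistent with the summary above, is to supervise `vis_head` against transmittance accumulated along training rays, so that at inference the visibility prior costs a single forward pass rather than a second ray march.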
- NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds [60.1382112938132]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF renders impressive images for novel views that are similar to the input views, but struggles with novel views that differ significantly from the training views.
arXiv Detail & Related papers (2023-04-13T06:40:08Z)
- SparseNeRF: Distilling Depth Ranking for Few-shot Novel View Synthesis [93.46963803030935]
We present a new Sparse-view NeRF (SparseNeRF) framework that exploits depth priors from real-world inaccurate observations.
We propose a simple yet effective constraint on NeRFs, a local depth ranking method, which encourages the expected depth ranking of the NeRF to be consistent with that of the coarse depth maps in local patches; see the sketch after this entry.
We also collect a new dataset NVS-RGBD that contains real-world depth maps from Azure Kinect, ZED 2, and iPhone 13 Pro.
arXiv Detail & Related papers (2023-03-28T17:58:05Z)
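A local depth ranking constraint of this kind can be sketched as a margin ranking penalty over pixel pairs sampled within a patch: only the ordering of the coarse depths is trusted, never their absolute values. The PyTorch snippet below is a hypothetical illustration, not SparseNeRF's exact loss.

```python
import torch

def depth_ranking_loss(pred_depth, coarse_depth, margin=1e-4):
    """Margin ranking penalty between pixel pairs from a local patch.

    pred_depth, coarse_depth: (num_pairs, 2) NeRF-rendered and coarse
    depths, where columns 0 and 1 are the two pixels of each pair.
    Only the *ordering* of the coarse depths is used as supervision.
    """
    # +1 if the coarse map says pixel 1 is farther than pixel 0, else -1.
    order = torch.sign(coarse_depth[:, 1] - coarse_depth[:, 0])
    # Positive when the NeRF depths agree with that ordering.
    diff = (pred_depth[:, 1] - pred_depth[:, 0]) * order
    # Penalize pairs whose predicted ordering is violated beyond a margin.
    return torch.relu(margin - diff).mean()

# Toy usage: 8 pixel pairs sampled from one patch of a coarse depth map.
pred = torch.rand(8, 2, requires_grad=True)
coarse = torch.rand(8, 2)    # e.g. from a Kinect/ZED/iPhone depth map
loss = depth_ranking_loss(pred, coarse)
loss.backward()
```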
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that immediately benefits existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose corresponding solutions, including combining the multilayer perceptron with convolutional layers.
Our approach is nearly free, introducing no obvious training or testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline; see the sketch after this entry.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
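Worst-case perturbations of this kind are typically found with a small projected-gradient inner loop. The hypothetical PyTorch sketch below perturbs input coordinates to maximize the photometric loss and then trains against the perturbed input; Aug-NeRF injects perturbations at three levels of the pipeline, whereas this sketch shows only the coordinate level, and all names are illustrative.

```python
import torch

def worst_case_perturbation(model, coords, target, eps=1e-2, steps=3, alpha=5e-3):
    """PGD-style inner loop: find a small coordinate perturbation that
    maximizes the photometric loss within an eps-ball.

    model:  maps (N, 3) coordinates to (N, 3) colors (stand-in for a NeRF query)
    coords: (N, 3) sample coordinates; target: (N, 3) reference colors
    """
    delta = torch.zeros_like(coords, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(model(coords + delta), target)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)        # stay inside the eps-ball
    return delta.detach()

# Toy usage with a linear stand-in for the radiance field.
model = torch.nn.Linear(3, 3)
x, y = torch.rand(16, 3), torch.rand(16, 3)
delta = worst_case_perturbation(model, x, y)
robust_loss = torch.nn.functional.mse_loss(model(x + delta), y)
robust_loss.backward()   # outer step: minimize the worst-case loss
```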
This list is automatically generated from the titles and abstracts of the papers on this site.