Dense Depth Priors for Neural Radiance Fields from Sparse Input Views
- URL: http://arxiv.org/abs/2112.03288v1
- Date: Mon, 6 Dec 2021 19:00:02 GMT
- Title: Dense Depth Priors for Neural Radiance Fields from Sparse Input Views
- Authors: Barbara Roessle, Jonathan T. Barron, Ben Mildenhall, Pratul P. Srinivasan, Matthias Nießner
- Abstract summary: We propose a method to synthesize novel views of whole rooms from an order of magnitude fewer images.
Our method enables data-efficient novel view synthesis on challenging indoor scenes, using as few as 18 images for an entire scene.
- Score: 37.92064060160628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRF) encode a scene into a neural representation
that enables photo-realistic rendering of novel views. However, a successful
reconstruction from RGB images requires a large number of input views taken
under static conditions - typically up to a few hundred images for room-size
scenes. Our method aims to synthesize novel views of whole rooms from an order
of magnitude fewer images. To this end, we leverage dense depth priors in order
to constrain the NeRF optimization. First, we take advantage of the sparse
depth data that is freely available from the structure from motion (SfM)
preprocessing step used to estimate camera poses. Second, we use depth
completion to convert these sparse points into dense depth maps and uncertainty
estimates, which are used to guide NeRF optimization. Our method enables
data-efficient novel view synthesis on challenging indoor scenes, using as few
as 18 images for an entire scene.
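To make the depth guidance concrete, below is a minimal sketch of how a dense depth prior with per-pixel uncertainty might be folded into the NeRF objective. It assumes a PyTorch-style setup and a Gaussian-style weighting of the depth residual by the predicted uncertainty; the function name, tensor names, and the weighting factor are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): add an
# uncertainty-weighted depth term to the standard NeRF photometric loss.
import torch

def nerf_loss_with_depth_prior(rgb_pred, rgb_gt, depth_pred,
                               depth_prior, depth_std, lambda_depth=0.1):
    """rgb_pred, rgb_gt: (N, 3) rendered vs. ground-truth ray colors.
    depth_pred:  (N,) depth rendered from the NeRF density along each ray.
    depth_prior: (N,) dense depth-completion output at the same pixels.
    depth_std:   (N,) predicted uncertainty of the depth prior.
    lambda_depth: assumed weight of the depth term."""
    # Standard NeRF reconstruction term on ray colors.
    photometric = ((rgb_pred - rgb_gt) ** 2).mean()
    # Depth term: penalize deviation from the prior, trusting
    # low-uncertainty pixels more (Gaussian negative log-likelihood style).
    depth_term = (((depth_pred - depth_prior) ** 2) / (2.0 * depth_std ** 2)
                  + torch.log(depth_std)).mean()
    return photometric + lambda_depth * depth_term
```

In this reading, confident depth-completion pixels pull the rendered depth strongly toward the prior, while uncertain pixels contribute little, letting the photometric term dominate there.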
Related papers
- Enhancing Neural Radiance Fields with Depth and Normal Completion Priors from Sparse Views [8.533926962066305]
Neural Radiance Fields (NeRF) produce highly realistic images by learning a scene representation with a neural network.
NeRF often struggles when too few input images are available, leading to inaccurately rendered views.
This framework improves view rendering by adding dense depth and normal completion priors to the NeRF optimization process.
arXiv Detail & Related papers (2024-07-08T07:03:22Z)
- SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields [19.740018132105757]
SceneRF is a self-supervised monocular scene reconstruction method using only posed image sequences for training.
At inference, a single input image suffices to hallucinate novel depth views which are fused together to obtain 3D scene reconstruction.
arXiv Detail & Related papers (2022-12-05T18:59:57Z)
- DINER: Depth-aware Image-based NEural Radiance fields [45.63488428831042]
We present Depth-aware Image-based NEural Radiance fields (DINER).
Given a sparse set of RGB input views, we predict depth and feature maps to guide the reconstruction of a scene representation.
We propose novel techniques to incorporate depth information into feature fusion and efficient scene sampling.
arXiv Detail & Related papers (2022-11-29T23:22:44Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Efficient Neural Radiance Fields with Learned Depth-Guided Sampling [43.79307270743013]
We present a hybrid scene representation which combines the best of implicit radiance fields and explicit depth maps for efficient rendering.
Experiments show that the proposed approach exhibits state-of-the-art performance on the DTU, Real Forward-facing and NeRF Synthetic datasets.
We also demonstrate the capability of our method to synthesize free-viewpoint videos of dynamic human performers in real-time.
arXiv Detail & Related papers (2021-12-02T18:59:32Z)
- NeMI: Unifying Neural Radiance Fields with Multiplane Images for Novel View Synthesis [69.19261797333635]
We propose an approach to perform novel view synthesis and depth estimation via dense 3D reconstruction from a single image.
Our NeMI unifies Neural Radiance Fields (NeRF) with Multiplane Images (MPI).
We also achieve competitive results in depth estimation on iBims-1 and NYU-v2 without annotated depth supervision.
arXiv Detail & Related papers (2021-03-27T13:41:00Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
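The NeRF entry above rests on volume rendering being naturally differentiable, which is also what lets a depth prior supervise a rendered depth in the main paper. As a reference point, here is a minimal, generic sketch of that compositing step for a single ray; names and shapes are illustrative and not tied to any particular implementation.

```python
# Generic sketch of differentiable volume rendering along one ray
# (alpha compositing of per-sample colors by predicted density).
import torch

def composite_ray(sigmas, colors, ts, deltas):
    """sigmas: (S,) densities, colors: (S, 3) per-sample RGB,
    ts: (S,) sample distances along the ray, deltas: (S,) sample spacings."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)          # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)  # accumulated transmittance
    trans = torch.cat([torch.ones(1), trans[:-1]])      # light reaching sample i
    weights = alphas * trans                            # contribution per sample
    rgb = (weights[:, None] * colors).sum(dim=0)        # rendered pixel color
    depth = (weights * ts).sum()                        # expected ray depth
    return rgb, depth
```

Because every step is differentiable, gradients from both the rendered color and the rendered depth of a ray flow back into the network, which is what allows image supervision alone (as in NeRF) or an added depth prior (as in the method above) to drive optimization.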
This list is automatically generated from the titles and abstracts of the papers listed on this site.