NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor
Multi-view Stereo
- URL: http://arxiv.org/abs/2109.01129v2
- Date: Fri, 3 Sep 2021 17:50:19 GMT
- Title: NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor
Multi-view Stereo
- Authors: Yi Wei, Shaohui Liu, Yongming Rao, Wang Zhao, Jiwen Lu, Jie Zhou
- Abstract summary: We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
- Score: 97.07453889070574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present a new multi-view depth estimation method that
utilizes both conventional SfM reconstruction and learning-based priors over
the recently proposed neural radiance fields (NeRF). Unlike existing
neural-network-based optimization methods that rely on estimated
correspondences, our method optimizes directly over implicit volumes,
eliminating the challenging step of matching pixels in indoor scenes. The key to our approach is to utilize
the learning-based priors to guide the optimization process of NeRF. Our system
first adapts a monocular depth network to the target scene by fine-tuning on
its sparse SfM reconstruction. Then, we show that the shape-radiance ambiguity
of NeRF still exists in indoor environments and propose to address the issue by
employing the adapted depth priors to monitor the sampling process of volume
rendering. Finally, a per-pixel confidence map acquired by error computation on
the rendered image can be used to further improve the depth quality.
Experiments show that our proposed framework significantly outperforms
state-of-the-art methods on indoor scenes, with surprising findings presented
on the effectiveness of correspondence-based optimization and NeRF-based
optimization over the adapted depth priors. In addition, we show that the
guided optimization scheme does not sacrifice the original synthesis capability
of neural radiance fields, improving the rendering quality on both seen and
novel views. Code is available at https://github.com/weiyithu/NerfingMVS.
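The guided optimization described in the abstract rests on two operations: restricting where volume-rendering samples are placed along each ray using the adapted depth prior, and deriving a per-pixel confidence from the error of the rendered image. A minimal NumPy sketch of both ideas follows; the function names, the stratified-bin sampling scheme, and the Gaussian confidence form are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def guided_sample_depths(depth_prior, error_bound, n_samples, near, far, rng=None):
    """Sample per-ray depths inside an interval centered on a depth prior.

    Instead of spreading samples uniformly over the full [near, far] range,
    samples are drawn (stratified) from [depth - err, depth + err], clipped
    to the scene bounds. depth_prior and error_bound are (n_rays,) arrays;
    the result is a sorted (n_rays, n_samples) array of sample depths.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    t_near = np.clip(depth_prior - error_bound, near, far)
    t_far = np.clip(depth_prior + error_bound, near, far)
    # Stratified sampling: one uniform jitter inside each of n_samples bins,
    # so the samples along each ray are monotonically increasing.
    bins = np.linspace(0.0, 1.0, n_samples + 1)
    lower, upper = bins[:-1], bins[1:]
    u = lower + (upper - lower) * rng.random((depth_prior.shape[0], n_samples))
    return t_near[:, None] + (t_far - t_near)[:, None] * u

def confidence_from_error(rendered, reference, sigma=0.1):
    """Per-pixel confidence from the photometric error of a rendered view.

    rendered and reference are (H, W, 3) images; low rendering error maps
    to confidence near 1, high error to confidence near 0.
    """
    err = np.abs(rendered - reference).mean(axis=-1)  # (H, W) mean abs error
    return np.exp(-((err / sigma) ** 2))
```

In practice the confidence map would then be used to filter or reweight the rendered depths, keeping only pixels whose rendering error is low.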
Related papers
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [73.50359502037232]
VoxNeRF is a novel approach to enhance the quality and efficiency of neural indoor reconstruction and novel view synthesis.
We propose an efficient voxel-guided sampling technique that selectively allocates computational resources to the most relevant segments of rays.
Our approach is validated with extensive experiments on ScanNet and ScanNet++.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- BID-NeRF: RGB-D image pose estimation with inverted Neural Radiance Fields [0.0]
We aim to improve the Inverted Neural Radiance Fields (iNeRF) algorithm, which defines the image pose estimation problem as a NeRF-based iterative linear optimization.
NeRFs are novel neural space representation models that can synthesize photorealistic novel views of real-world scenes or objects.
arXiv Detail & Related papers (2023-10-05T14:27:06Z)
- MIPS-Fusion: Multi-Implicit-Submaps for Scalable and Robust Online Neural RGB-D Reconstruction [15.853932110058585]
We introduce a robust and scalable online RGB-D reconstruction method based on a novel neural implicit representation -- multi-implicit-submap.
In our method, neural submaps are incrementally allocated alongside the scanning trajectory and efficiently learned with local neural bundle adjustments.
For the first time, randomized optimization is made possible in neural tracking through several key designs in the learning process, enabling efficient and robust tracking even under fast camera motions.
arXiv Detail & Related papers (2023-08-17T02:33:16Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers unseen scenes at both the pixel level and the geometry level with only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- Riggable 3D Face Reconstruction via In-Network Optimization [58.016067611038046]
This paper presents a method for riggable 3D face reconstruction from monocular images.
It jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations.
Experiments demonstrate that our method achieves SOTA reconstruction accuracy, reasonable robustness and generalization ability.
arXiv Detail & Related papers (2021-04-08T03:53:20Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.