NeRFPrior: Learning Neural Radiance Field as a Prior for Indoor Scene Reconstruction
- URL: http://arxiv.org/abs/2503.18361v2
- Date: Sun, 30 Mar 2025 04:43:37 GMT
- Title: NeRFPrior: Learning Neural Radiance Field as a Prior for Indoor Scene Reconstruction
- Authors: Wenyuan Zhang, Emily Yue-ting Jia, Junsheng Zhou, Baorui Ma, Kanle Shi, Yu-Shen Liu, Zhizhong Han
- Abstract summary: We present NeRFPrior, which adopts a neural radiance field as a prior to learn signed distance fields. Our NeRF prior provides both geometric and color clues, and can be trained quickly on the same scene without additional data.
- Score: 46.776602829615115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, it has been shown that priors are vital for neural implicit functions to reconstruct high-quality surfaces from multi-view RGB images. However, current priors require large-scale pre-training and provide only geometric clues, without considering the importance of color. In this paper, we present NeRFPrior, which adopts a neural radiance field as a prior to learn signed distance fields for surface reconstruction using volume rendering. Our NeRF prior provides both geometric and color clues, and can be trained quickly on the same scene without additional data. Based on the NeRF prior, we learn a signed distance function (SDF) by explicitly imposing a multi-view consistency constraint on each ray intersection for surface inference. Specifically, at each ray intersection, we use the density in the prior as a coarse geometry estimate, while using the color near the surface as a clue to check its visibility from other view angles. For textureless areas where the multi-view consistency constraint does not work well, we further introduce a depth consistency loss with confidence weights to infer the SDF. Our method outperforms the state-of-the-art methods on widely used benchmarks.
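To make the two constraints concrete, here is a minimal sketch of how they could look in code. It illustrates the idea in the abstract, not the authors' implementation: `nerf_prior` (a pretrained per-scene radiance field returning density and view-dependent color), `sdf_net`, the median-density test, and the 0.05 consistency threshold are all assumptions.

```python
import torch

def multi_view_consistency_loss(nerf_prior, sdf_net, x_surf, view_dirs):
    """Pull the SDF zero-level set onto prior intersections that are
    color-consistent across views.

    x_surf:    (N, 3) ray/surface intersections estimated from the NeRF prior.
    view_dirs: (V, N, 3) unit directions from V other cameras toward x_surf.
    """
    with torch.no_grad():
        sigma, _ = nerf_prior(x_surf, view_dirs[0])   # density = coarse geometry clue
        colors = torch.stack([nerf_prior(x_surf, d)[1] for d in view_dirs])
        # Treat a point as visible from the other views if its prior colors
        # agree across them (small std); 0.05 is an assumed threshold.
        consistent = colors.std(dim=0).mean(dim=-1) < 0.05
        on_surface = (sigma > sigma.median()) & consistent  # assumed density test
    return sdf_net(x_surf)[on_surface].abs().mean()

def depth_consistency_loss(pred_depth, prior_depth, confidence):
    # Fallback for textureless regions: agree with the prior's depth,
    # weighted by a per-ray confidence map.
    return (confidence * (pred_depth - prior_depth).abs()).mean()
```

In training, the second loss would matter mainly where the multi-view check is unreliable, which is exactly the textureless case the abstract calls out.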
Related papers
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- Unsupervised Inference of Signed Distance Functions from Single Sparse Point Clouds without Learning Priors [54.966603013209685]
It is vital to infer signed distance functions (SDFs) from 3D point clouds.
We present a neural network to directly infer SDFs from single sparse point clouds.
arXiv Detail & Related papers (2023-03-25T15:56:50Z)
- DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models [5.255302402546892]
We learn a prior over scene geometry and color using a denoising diffusion model (DDM).
We show that these gradients of logarithms of RGBD patch priors serve to regularize the geometry and color of a scene.
Evaluations on LLFF, the most relevant dataset, show that our learned prior achieves improved quality in the reconstructed geometry and improved generalization to novel views.
arXiv Detail & Related papers (2023-02-23T18:52:28Z)
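As a rough illustration of the regularizer described in the DiffusioNeRF entry above (not the authors' code): for a denoising diffusion model, the predicted noise approximates the negative score of the data distribution, so `-eps / sigma_t` can be injected as a gradient on rendered RGBD patches. `ddm_eps`, `sigma_t`, and the weight are assumptions.

```python
import torch

def apply_rgbd_prior_regularizer(ddm_eps, rendered_patch, t, sigma_t, weight=1e-3):
    """Nudge a rendered RGBD patch toward high prior likelihood.

    rendered_patch: (B, 4, H, W) patch produced by differentiable rendering,
    still attached to the scene model's computation graph.
    """
    with torch.no_grad():
        # DDM score estimate: grad log p(patch) ~= -eps_pred / sigma_t.
        score = -ddm_eps(rendered_patch.detach(), t) / sigma_t
    # Backpropagate the weighted, negated score through the renderer so the
    # scene parameters ascend the prior log-density.
    rendered_patch.backward(gradient=-weight * score)
```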
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers unseen scenes at both the pixel level and the geometry level from only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50% while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
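A minimal sketch of what a depth-aware sampling strategy like the one named in the DARF entry above could look like; the half-uniform/half-Gaussian split, the band width, and all names are assumptions, not the paper's DADS algorithm.

```python
import torch

def depth_aware_samples(depth, n_samples, band=0.1, near=0.1, far=10.0):
    """Concentrate ray samples near an estimated depth.

    depth: (R,) coarse depth estimate for R rays.
    Returns (R, n_samples) sorted sample distances along each ray.
    """
    n_near = n_samples // 2
    # Half of the samples stay uniform over the ray, as coverage in case the
    # depth estimate is wrong.
    t_uniform = near + (far - near) * torch.rand(depth.shape[0], n_samples - n_near)
    # The rest cluster in a narrow band around the depth estimate.
    t_depth = (depth[:, None] + band * torch.randn(depth.shape[0], n_near)).clamp(near, far)
    return torch.cat([t_uniform, t_depth], dim=-1).sort(dim=-1).values
```

Spending most samples near the surface is what makes a sample-budget cut like the reported 50% reduction plausible.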
- CAP-UDF: Learning Unsigned Distance Functions Progressively from Raw Point Clouds with Consistency-Aware Field Optimization [54.69408516025872]
CAP-UDF is a novel method to learn consistency-aware UDFs from raw point clouds.
We train a neural network to gradually infer the relationship between queries and the approximated surface.
We also introduce a polygonization algorithm to extract surfaces using the gradients of the learned UDF.
arXiv Detail & Related papers (2022-10-06T08:51:08Z)
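The surface-extraction step mentioned in the CAP-UDF entry above relies on a standard property of distance fields: stepping against the gradient by the current distance value moves a query toward the zero level set. A minimal sketch under that assumption (the names and step count are illustrative, not the paper's polygonization algorithm):

```python
import torch
import torch.nn.functional as F

def project_to_surface(udf_net, queries, n_steps=5):
    """Iteratively move queries onto the surface: x <- x - udf(x) * grad(x)."""
    x = queries.clone()
    for _ in range(n_steps):
        x.requires_grad_(True)
        d = udf_net(x)                          # unsigned distances, shape (N,)
        (g,) = torch.autograd.grad(d.sum(), x)  # field gradient, shape (N, 3)
        x = (x - d[:, None] * F.normalize(g, dim=-1)).detach()
    return x
```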
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
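The key contrast with a NeRF in the entry above is that a light field network maps a ray, not a 3D point, to radiance, so rendering needs one network evaluation per ray instead of integrating many samples. A minimal sketch assuming a plain 4D two-plane ray parameterization (the paper's ray-space embedding is not reproduced here):

```python
import torch
from torch import nn

class TinyLightField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, uvst):
        # uvst: (N, 4) ray coordinates (u, v) and (s, t) on two planes.
        return self.mlp(uvst)  # one forward pass per ray, no volume integration
```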
- Neural RGB-D Surface Reconstruction [15.438678277705424]
Methods which learn a neural radiance field have shown impressive image synthesis results, but the underlying geometry representation is only a coarse approximation of the real geometry.
We demonstrate how depth measurements can be incorporated into the radiance field formulation to produce more detailed and complete reconstruction results.
arXiv Detail & Related papers (2021-04-09T18:00:01Z)
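A minimal sketch of the simplest way depth measurements could enter radiance-field training as described in the entry above: add a depth term to the photometric loss over valid sensor pixels. The weighting and masking are assumptions, not the paper's exact formulation.

```python
import torch

def rgbd_loss(pred_rgb, gt_rgb, pred_depth, sensor_depth, lambda_d=0.1):
    rgb_term = ((pred_rgb - gt_rgb) ** 2).mean()
    valid = sensor_depth > 0                      # skip missing depth readings
    depth_term = (pred_depth[valid] - sensor_depth[valid]).abs().mean()
    return rgb_term + lambda_d * depth_term
```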