NC-SDF: Enhancing Indoor Scene Reconstruction Using Neural SDFs with View-Dependent Normal Compensation
- URL: http://arxiv.org/abs/2405.00340v1
- Date: Wed, 1 May 2024 06:26:35 GMT
- Title: NC-SDF: Enhancing Indoor Scene Reconstruction Using Neural SDFs with View-Dependent Normal Compensation
- Authors: Ziyi Chen, Xiaolong Wu, Yu Zhang, et al.
- Abstract summary: We present NC-SDF, a neural signed distance field (SDF) 3D reconstruction framework with view-dependent normal compensation (NC).
By adaptively learning and correcting the biases, our NC-SDF effectively mitigates the adverse impact of inconsistent supervision.
Experiments on synthetic and real-world datasets demonstrate that NC-SDF outperforms existing approaches in terms of reconstruction quality.
- Score: 13.465401006826294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art neural implicit surface representations have achieved impressive results in indoor scene reconstruction by incorporating monocular geometric priors as additional supervision. However, we have observed that multi-view inconsistency between such priors poses a challenge for high-quality reconstructions. In response, we present NC-SDF, a neural signed distance field (SDF) 3D reconstruction framework with view-dependent normal compensation (NC). Specifically, we integrate view-dependent biases in monocular normal priors into the neural implicit representation of the scene. By adaptively learning and correcting the biases, our NC-SDF effectively mitigates the adverse impact of inconsistent supervision, enhancing both the global consistency and local details in the reconstructions. To further refine the details, we introduce an informative pixel sampling strategy to pay more attention to intricate geometry with higher information content. Additionally, we design a hybrid geometry modeling approach to improve the neural implicit representation. Experiments on synthetic and real-world datasets demonstrate that NC-SDF outperforms existing approaches in terms of reconstruction quality.
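To make the normal-compensation idea concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the module name `NormalCompensation`, the network sizes, and the choice of predicting a per-sample 3D offset (rather than, say, a view-dependent rotation) are assumptions made only to illustrate how a view-dependent correction could be applied to a monocular normal prior before a normal consistency loss.

```python
# Minimal sketch of view-dependent normal compensation (NOT the authors' code).
# Assumption: the bias in a monocular normal prior is modeled as a small,
# view-dependent offset predicted by an MLP and applied before the normal loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalCompensation(nn.Module):
    """Predicts a view-dependent correction for monocular normal priors."""

    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        # Input: per-point scene feature + view direction + prior normal.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3 + 3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),  # 3D offset added to the prior normal
        )

    def forward(self, feat, view_dir, prior_normal):
        offset = self.mlp(torch.cat([feat, view_dir, prior_normal], dim=-1))
        # Re-normalize so the compensated prior stays a unit normal.
        return F.normalize(prior_normal + offset, dim=-1)

def normal_loss(sdf_normal, compensated_prior):
    """L1 plus angular consistency between normals rendered from the SDF and
    the compensated priors (a common choice in prior-supervised neural SDFs)."""
    l1 = (sdf_normal - compensated_prior).abs().sum(dim=-1)
    ang = 1.0 - (sdf_normal * compensated_prior).sum(dim=-1)
    return (l1 + ang).mean()
```

In a setup like this, the compensated prior (rather than the raw monocular prediction) supervises the normals rendered from the SDF, so view-dependent errors in the prior can be absorbed by the compensation network instead of distorting the reconstructed geometry.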
Related papers
- ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics.
We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal.
Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
arXiv Detail & Related papers (2024-08-22T17:59:01Z)
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g. NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
- PSDF: Prior-Driven Neural Implicit Surface Learning for Multi-view Reconstruction [31.768161784030923]
The PSDF framework is proposed, which draws on external geometric priors from a pretrained MVS network together with internal geometric priors inherent in the NISR (neural implicit surface reconstruction) model.
Experiments on the Tanks and Temples dataset show that PSDF achieves state-of-the-art performance on complex uncontrolled scenes.
arXiv Detail & Related papers (2024-01-23T13:30:43Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing these challenges.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- DebSDF: Delving into the Details and Bias of Neural Indoor Scene Reconstruction [34.07747722661987]
This paper focuses on the utilization of uncertainty in monocular priors and the bias in SDF-based volume rendering.
We propose an uncertainty modeling technique that associates larger uncertainties with larger errors in the monocular priors.
High-uncertainty priors are then excluded from optimization to prevent bias.
arXiv Detail & Related papers (2023-08-29T18:00:22Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops significantly for larger, more complex scenes and for sparse viewpoints.
This drop is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
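For context on the "free of bias in the first order of approximation" claim in the NeuS entry above, the weight construction proposed in the NeuS paper can be summarized as follows (paraphrased from that paper; the notation may differ slightly from the original):

```latex
% NeuS-style occlusion-aware weights along a ray p(t) = o + t v.
% f is the SDF and \Phi_s(x) = (1 + e^{-s x})^{-1} is a logistic sigmoid
% with learned sharpness s.
\rho(t) = \max\!\left( \frac{-\frac{\mathrm{d}\Phi_s}{\mathrm{d}t}\big(f(\mathbf{p}(t))\big)}{\Phi_s\big(f(\mathbf{p}(t))\big)},\; 0 \right),
\qquad
w(t) = T(t)\,\rho(t),
\qquad
T(t) = \exp\!\left(-\int_0^t \rho(u)\,\mathrm{d}u\right).
```

Under a first-order (locally planar) approximation of the SDF near the surface, this weight attains its maximum at the zero-level set, which is the sense in which the formulation is unbiased.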