ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction
- URL: http://arxiv.org/abs/2408.12598v2
- Date: Thu, 26 Sep 2024 06:31:25 GMT
- Title: ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction
- Authors: Ziyu Tang, Weicai Ye, Yifan Wang, Di Huang, Hujun Bao, Tong He, Guofeng Zhang
- Abstract summary: It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics.
We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal.
Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
- Score: 50.07671826433922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit reconstruction via volume rendering has demonstrated its effectiveness in recovering dense 3D surfaces. However, it is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics. To address this issue, previous methods typically employ geometric priors, which are often constrained by the performance of the prior models. In this paper, we propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal. Unlike previous methods that uniformly apply geometric priors on all samples, introducing significant bias in accuracy, our proposed normal deflection field dynamically learns and adapts the utilization of samples based on their specific characteristics, thereby improving both the accuracy and effectiveness of the model. Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures. In addition, we introduce a novel ray sampling strategy based on the deflection angle to facilitate the unbiased rendering process, which significantly improves the quality and accuracy of intricate surfaces, especially on thin structures. Consistent improvements on various challenging datasets demonstrate the superiority of our method.
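The adaptive gating described in the abstract can be illustrated with a minimal sketch: a per-sample cosine of the learned deflection angle scales the normal-prior loss, so samples where the prior disagrees with the scene normal contribute less. The tensor shapes, the `deflection_cos` head, and the loss form are assumptions for illustration, not the paper's exact formulation (in particular, how the deflection field itself is supervised is omitted here).

```python
import torch
import torch.nn.functional as F

def deflection_weighted_normal_loss(pred_normals, prior_normals, deflection_cos):
    """Down-weight the normal prior where the learned deflection is large.

    pred_normals:   (N, 3) normals from the SDF gradient at sampled points.
    prior_normals:  (N, 3) monocular prior normals for the same samples.
    deflection_cos: (N,) cosine of the deflection angle predicted by a small
                    MLP head; near 1 -> trust the prior (e.g. flat walls/floors),
                    near 0 -> the prior likely disagrees with the true geometry.
    """
    pred_normals = F.normalize(pred_normals, dim=-1)
    prior_normals = F.normalize(prior_normals, dim=-1)
    # Per-sample disagreement between the rendered normal and the prior normal.
    per_sample = 1.0 - (pred_normals * prior_normals).sum(dim=-1)
    # Gate the prior supervision by the learned deflection (detached here;
    # the full method trains the deflection field with its own terms).
    weights = deflection_cos.clamp(min=0.0).detach()
    return (weights * per_sample).mean()

# Toy usage with random tensors.
pred = torch.randn(1024, 3, requires_grad=True)
prior = torch.randn(1024, 3)
cos_theta = torch.rand(1024)
deflection_weighted_normal_loss(pred, prior, cos_theta).backward()
```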
Related papers
- InfoNorm: Mutual Information Shaping of Normals for Sparse-View Reconstruction [15.900375207144759]
3D surface reconstruction from multi-view images is essential for scene understanding and interaction.
Recent implicit surface representations, such as Neural Radiance Fields (NeRFs) and signed distance functions (SDFs), employ various geometric priors to resolve the lack of observed information.
We propose regularizing the geometric modeling by explicitly encouraging the mutual information among surface normals of highly correlated scene points.
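As a rough illustration of the summarized idea only: the paper shapes mutual information between the normals of correlated points, whereas the sketch below substitutes a simpler weighted cosine-agreement term over assumed, precomputed correlated pairs.

```python
import torch
import torch.nn.functional as F

def correlated_normal_regularizer(normals, pair_idx, correlation):
    """Encourage similar normals for pairs of scene points deemed correlated.

    normals:     (N, 3) surface normals at sampled points.
    pair_idx:    (P, 2) long tensor of indices of correlated point pairs.
    correlation: (P,) correlation strength in [0, 1], used as a pair weight.
    """
    n = F.normalize(normals, dim=-1)
    n_a, n_b = n[pair_idx[:, 0]], n[pair_idx[:, 1]]
    disagreement = 1.0 - (n_a * n_b).sum(dim=-1)
    return (correlation * disagreement).mean()
```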
arXiv Detail & Related papers (2024-07-17T15:46:25Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas previous methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we introduce a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
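A minimal sketch of replacing a single global Eikonal coefficient with per-ray weights, under assumed tensor shapes; the actual RaNeuS weighting factor is derived differently.

```python
import torch

def raywise_eikonal_loss(sdf_gradients, ray_weights):
    """Eikonal term (||grad f|| ~ 1), weighted per ray instead of globally.

    sdf_gradients: (R, S, 3) SDF gradients at S samples along R rays.
    ray_weights:   (R,) learned or heuristic per-ray weights.
    """
    grad_norm = sdf_gradients.norm(dim=-1)   # (R, S)
    per_sample = (grad_norm - 1.0) ** 2      # Eikonal residual per sample
    per_ray = per_sample.mean(dim=-1)        # (R,) average along each ray
    return (ray_weights * per_ray).mean()

# Toy usage.
grads = torch.randn(64, 32, 3, requires_grad=True)
weights = torch.rand(64)
raywise_eikonal_loss(grads, weights).backward()
```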
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
- GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image [94.56927147492738]
We introduce GeoWizard, a new generative foundation model designed for estimating geometric attributes from single images.
We show that leveraging diffusion priors can markedly improve generalization, detail preservation, and efficiency in resource usage.
We propose a simple yet effective strategy to segregate the complex data distribution of various scenes into distinct sub-distributions.
arXiv Detail & Related papers (2024-03-18T17:50:41Z)
- PSDF: Prior-Driven Neural Implicit Surface Learning for Multi-view Reconstruction [31.768161784030923]
PSDF is a framework that leverages both external geometric priors from a pretrained multi-view stereo (MVS) network and internal geometric priors inherent in the neural implicit surface reconstruction (NISR) model.
Experiments on the Tanks and Temples dataset show that PSDF achieves state-of-the-art performance on complex uncontrolled scenes.
arXiv Detail & Related papers (2024-01-23T13:30:43Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- DebSDF: Delving into the Details and Bias of Neural Indoor Scene Reconstruction [34.07747722661987]
This paper focuses on the utilization of uncertainty in monocular priors and the bias in SDF-based volume rendering.
We propose an uncertainty modeling technique that associates larger uncertainties with larger errors in the monocular priors.
High-uncertainty priors are then excluded from optimization to prevent bias.
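The summarized mechanism, down-weighting monocular priors by their estimated uncertainty and excluding the most uncertain ones, can be sketched as follows; the threshold and weighting form are illustrative placeholders, not DebSDF's actual scheme.

```python
import torch

def uncertainty_filtered_prior_loss(pred, prior, uncertainty, threshold=0.5):
    """Exclude high-uncertainty monocular priors and down-weight the rest.

    pred, prior: (N, C) predicted and prior quantities (e.g. depth or normals).
    uncertainty: (N,) estimated uncertainty of the prior; larger = less trusted.
    """
    keep = uncertainty < threshold                 # drop unreliable priors
    if not keep.any():
        return pred.sum() * 0.0                    # nothing reliable in this batch
    weights = 1.0 / (1.0 + uncertainty[keep])      # softly down-weight the rest
    residual = (pred[keep] - prior[keep]).abs().mean(dim=-1)
    return (weights * residual).mean()
```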
arXiv Detail & Related papers (2023-08-29T18:00:22Z)
- HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces [6.382138631957651]
We present High-Resolution NeuS, a novel neural implicit surface reconstruction method.
HR-NeuS recovers high-frequency surface geometry while maintaining large-scale reconstruction accuracy.
We demonstrate through experiments on DTU and BlendedMVS datasets that our approach produces 3D geometries that are qualitatively more detailed and quantitatively of similar accuracy compared to previous approaches.
arXiv Detail & Related papers (2023-02-14T02:25:16Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops for larger, more complex scenes and for sparse viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss that does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
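A minimal sketch of adding monocular depth and normal cues as extra loss terms next to the RGB loss, assuming the monocular depth has already been scale/shift-aligned to the rendered depth; the loss weights are placeholders.

```python
import torch
import torch.nn.functional as F

def monocular_cue_losses(rgb, rgb_gt, depth, depth_prior, normal, normal_prior,
                         w_depth=0.1, w_normal=0.05):
    """RGB loss plus monocular depth/normal prior terms for a batch of rays.

    Assumes depth_prior has already been scale/shift-aligned to the rendered
    depth (monocular depth is only defined up to an affine transform).
    """
    rgb_loss = (rgb - rgb_gt).abs().mean()
    depth_loss = (depth - depth_prior).abs().mean()
    normal_loss = (1.0 - F.cosine_similarity(normal, normal_prior, dim=-1)).mean()
    return rgb_loss + w_depth * depth_loss + w_normal * normal_loss
```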
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Adaptive Surface Normal Constraint for Depth Estimation [102.7466374038784]
We introduce a simple yet effective method, named Adaptive Surface Normal (ASN) constraint, to correlate the depth estimation with geometric consistency.
Our method can faithfully reconstruct the 3D geometry and is robust to local shape variations, such as boundaries, sharp corners and noise.
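A rough sketch of the general recipe of tying depth to surface normals: back-project a depth map with pinhole intrinsics and compare finite-difference normals against reference normals. The uniform weighting here is a placeholder; the actual ASN constraint selects and weights neighboring points adaptively.

```python
import torch
import torch.nn.functional as F

def normals_from_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) and estimate normals via cross products
    of finite differences of the resulting 3D points."""
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h, dtype=depth.dtype),
                          torch.arange(w, dtype=depth.dtype), indexing="ij")
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts = torch.stack([x, y, depth], dim=-1)          # (H, W, 3) camera-space points
    dx = pts[:, 1:, :] - pts[:, :-1, :]               # horizontal differences
    dy = pts[1:, :, :] - pts[:-1, :, :]               # vertical differences
    n = torch.cross(dx[:-1], dy[:, :-1], dim=-1)      # (H-1, W-1, 3)
    return F.normalize(n, dim=-1)

def normal_consistency_loss(depth, ref_normals, fx, fy, cx, cy):
    """Penalize disagreement between depth-derived normals and reference normals."""
    pred_n = normals_from_depth(depth, fx, fy, cx, cy)
    ref_n = F.normalize(ref_normals[:-1, :-1], dim=-1)
    return (1.0 - (pred_n * ref_n).sum(dim=-1)).mean()
```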
arXiv Detail & Related papers (2021-03-29T10:36:25Z)