BayesSDF: Surface-Based Laplacian Uncertainty Estimation for 3D Geometry with Neural Signed Distance Fields
- URL: http://arxiv.org/abs/2507.06269v2
- Date: Mon, 14 Jul 2025 15:52:55 GMT
- Title: BayesSDF: Surface-Based Laplacian Uncertainty Estimation for 3D Geometry with Neural Signed Distance Fields
- Authors: Rushil Desai
- Abstract summary: We introduce BayesSDF, a novel framework for uncertainty quantification in neural implicit SDF models. BayesSDF is motivated by scientific simulation applications with 3D environments such as forests. We show that BayesSDF outperforms existing methods in both calibration and geometric consistency.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantifying uncertainty in neural implicit 3D representations, particularly those utilizing Signed Distance Functions (SDFs), remains a substantial challenge due to computational inefficiencies, scalability issues, and geometric inconsistencies. Existing methods typically neglect direct geometric integration, leading to poorly calibrated uncertainty maps. We introduce BayesSDF, a novel probabilistic framework for uncertainty quantification in neural implicit SDF models, motivated by scientific simulation applications with 3D environments (e.g., forests) such as modeling fluid flow through forests, where precise surface geometry and reliable uncertainty estimates are essential. Unlike radiance-based models such as Neural Radiance Fields (NeRF) or 3D Gaussian splatting, which lack explicit surface formulations, Signed Distance Functions (SDFs) define continuous and differentiable geometry, making them better suited for physical modeling and analysis. BayesSDF leverages a Laplace approximation to quantify local surface instability using Hessian-based metrics, enabling efficient, surface-aware uncertainty estimation. Our method shows that uncertainty predictions correspond closely with poorly reconstructed geometry, providing actionable confidence measures for downstream use. Extensive evaluations on synthetic and real-world datasets demonstrate that BayesSDF outperforms existing methods in both calibration and geometric consistency, establishing a strong foundation for uncertainty-aware 3D scene reconstruction, simulation, and robotic decision-making.
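To make the abstract's core idea concrete, the following is a minimal, self-contained sketch of a Laplace approximation over an SDF's parameters, with predictive uncertainty propagated to a query point via the delta method. It is an illustration only, not the BayesSDF implementation: the "network" is a toy analytic sphere SDF with four parameters (center and radius) standing in for an MLP, the prior strength and sample counts are arbitrary, and the Hessian is computed by finite differences rather than the paper's Hessian-based metrics.

```python
import numpy as np

# Toy "neural" SDF: a sphere with learnable center c and radius r,
# standing in for an MLP (hypothetical stand-in, not the BayesSDF network).
def sdf(x, theta):
    c, r = theta[:3], theta[3]
    return np.linalg.norm(x - c) - r

# Synthetic surface samples assumed to lie on the true surface (unit sphere).
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)
theta_hat = np.array([0.0, 0.0, 0.0, 1.0])  # MAP estimate of the parameters

def loss(theta):
    # Squared SDF residual at surface samples, plus a weak Gaussian prior.
    res = np.array([sdf(p, theta) for p in pts])
    return 0.5 * np.sum(res**2) + 0.5 * 1e-2 * np.sum(theta**2)

def hessian(f, theta, eps=1e-4):
    # Finite-difference Hessian of the loss at the MAP estimate.
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            t = theta.copy(); t[i] += eps; t[j] += eps; fpp = f(t)
            t = theta.copy(); t[i] += eps; t[j] -= eps; fpm = f(t)
            t = theta.copy(); t[i] -= eps; t[j] += eps; fmp = f(t)
            t = theta.copy(); t[i] -= eps; t[j] -= eps; fmm = f(t)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * eps**2)
    return H

# Laplace approximation: posterior covariance ~ inverse Hessian at the MAP.
H = hessian(loss, theta_hat)
cov = np.linalg.inv(H)

def grad_theta_sdf(x, theta, eps=1e-5):
    # Gradient of the SDF value w.r.t. parameters (central differences).
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        t1, t2 = theta.copy(), theta.copy()
        t1[i] += eps; t2[i] -= eps
        g[i] = (sdf(x, t1) - sdf(x, t2)) / (2 * eps)
    return g

# Delta method: predictive variance of the SDF value at a surface point.
x_query = np.array([1.0, 0.0, 0.0])
g = grad_theta_sdf(x_query, theta_hat)
var = g @ cov @ g
print(f"SDF uncertainty (std) at query point: {np.sqrt(var):.4f}")
```

Regions whose reconstruction is weakly constrained by the data yield a flatter loss, a smaller Hessian, and hence a larger propagated variance, which is the intuition behind tying surface uncertainty to Hessian-based local instability.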
Related papers
- Perfecting Depth: Uncertainty-Aware Enhancement of Metric Depth [33.61994004497114]
We propose a novel two-stage framework for sensor depth enhancement, called Perfecting Depth. This framework leverages the nature of diffusion models to automatically detect unreliable depth regions while preserving geometric cues. Our framework sets a new baseline for sensor depth enhancement, with potential applications in autonomous driving, robotics, and immersive technologies.
arXiv Detail & Related papers (2025-06-05T04:09:11Z) - Thin-Shell-SfT: Fine-Grained Monocular Non-rigid 3D Surface Tracking with Neural Deformation Fields [66.1612475655465]
3D reconstruction of deformable surfaces from RGB videos is a challenging problem. Existing methods use deformation models with statistical, neural, or physical priors. We propose Thin-Shell-SfT, a new method for non-rigid 3D tracking of meshes.
arXiv Detail & Related papers (2025-03-25T18:00:46Z) - ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics. We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal. Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
arXiv Detail & Related papers (2024-08-22T17:59:01Z) - Deep Modeling of Non-Gaussian Aleatoric Uncertainty [4.969887562291159]
Deep learning offers promising new ways to accurately model aleatoric uncertainty in robotic state estimation systems. In this study, we formulate and evaluate three fundamental deep learning approaches for conditional probability density modeling. Our results show that these deep learning methods can accurately capture complex uncertainty patterns, highlighting their potential for improving the reliability and robustness of estimation systems.
arXiv Detail & Related papers (2024-05-30T22:13:17Z) - PhyRecon: Physically Plausible Neural Scene Reconstruction [81.73129450090684]
We introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations.
Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points.
Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets.
arXiv Detail & Related papers (2024-04-25T15:06:58Z) - Bayesian NeRF: Quantifying Uncertainty with Volume Density for Neural Implicit Fields [1.199955563466263]
We present a Bayesian Neural Radiance Field (NeRF), which explicitly quantifies uncertainty in the volume density by modeling uncertainty in the occupancy. NeRF diverges from traditional geometric methods by providing an enriched scene representation, rendering color and density in 3D space from various viewpoints. We show that our method significantly enhances performance on RGB and depth images in a comprehensive dataset.
arXiv Detail & Related papers (2024-04-10T04:24:42Z) - FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with Pre-trained Vision-Language Models [59.13757801286343]
Few-shot class-incremental learning aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data. We introduce the FILP-3D framework with two novel components: the Redundant Feature Eliminator (RFE) for feature space misalignment and the Spatial Noise Compensator (SNC) for significant noise.
arXiv Detail & Related papers (2023-12-28T14:52:07Z) - GUPNet++: Geometry Uncertainty Propagation Network for Monocular 3D Object Detection [92.41859045360532]
We propose a novel Geometry Uncertainty Propagation Network (GUPNet++). It models the uncertainty propagation relationship of the geometry projection during training, improving the stability and efficiency of end-to-end model learning. Experiments show that the proposed approach not only obtains state-of-the-art (SOTA) performance in image-based monocular 3D detection but also demonstrates superior efficacy with a simplified framework.
arXiv Detail & Related papers (2023-10-24T08:45:15Z) - Strategic Geosteering Workflow with Uncertainty Quantification and Deep Learning: A Case Study on the Goliat Field [0.0]
This paper presents a practical workflow consisting of offline and online phases.
The offline phase includes training and building of an uncertain prior near-well geo-model.
The online phase uses the flexible iterative ensemble smoother (FlexIES) to perform real-time assimilation of extra-deep electromagnetic data.
arXiv Detail & Related papers (2022-10-27T15:38:26Z) - φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z) - Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF [17.698319441265223]
We solve the problem of 6-DoF localisation and 3D dense reconstruction in spatial environments as approximate Bayesian inference in a deep state-space model.
This results in an expressive predictive model of the world, often missing in current state-of-the-art visual SLAM solutions.
We evaluate our approach on realistic unmanned aerial vehicle flight data, nearing the performance of state-of-the-art visual-inertial odometry systems.
arXiv Detail & Related papers (2020-06-17T22:06:35Z) - Semi-supervised deep learning for high-dimensional uncertainty quantification [6.910275451003041]
This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis.
An autoencoder is first adopted for mapping the high-dimensional space into a low-dimensional latent space.
A deep feedforward neural network is utilized to learn the mapping relationship and reconstruct the latent space.
arXiv Detail & Related papers (2020-06-01T15:15:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.