BayesSDF: Surface-Based Laplacian Uncertainty Estimation for 3D Geometry with Neural Signed Distance Fields
- URL: http://arxiv.org/abs/2507.06269v3
- Date: Thu, 04 Sep 2025 18:52:33 GMT
- Title: BayesSDF: Surface-Based Laplacian Uncertainty Estimation for 3D Geometry with Neural Signed Distance Fields
- Authors: Rushil Desai
- Abstract summary: BayesSDF is a novel probabilistic framework for uncertainty estimation in neural implicit 3D representations. By enabling surface-aware uncertainty quantification, BayesSDF lays the groundwork for more robust, interpretable, and actionable 3D perception systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate surface estimation is critical for downstream tasks in scientific simulation, yet quantifying uncertainty in implicit neural 3D representations remains a substantial challenge due to computational inefficiency, poor scalability, and geometric inconsistency. Current neural implicit surface models do not offer a principled way to quantify uncertainty, limiting their reliability in real-world applications. Inspired by recent probabilistic rendering approaches, we introduce BayesSDF, a novel probabilistic framework for uncertainty estimation in neural implicit 3D representations. Unlike radiance-based models such as Neural Radiance Fields (NeRF) or 3D Gaussian Splatting, Signed Distance Functions (SDFs) provide continuous, differentiable surface representations, making them especially well suited for uncertainty-aware modeling. BayesSDF applies a Laplace approximation over the SDF network weights and derives Hessian-based metrics to estimate local geometric instability. We empirically demonstrate that these uncertainty estimates correlate strongly with surface reconstruction error on both synthetic and real-world benchmarks. By enabling surface-aware uncertainty quantification, BayesSDF lays the groundwork for more robust, interpretable, and actionable 3D perception systems.
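The Laplace-approximation idea in the abstract can be sketched on a toy model. BayesSDF applies it to a neural SDF; the sketch below uses a model that is linear in a fixed feature map (a hypothetical polynomial `phi`), which keeps the Laplace posterior exact while illustrating the same Hessian-based machinery: the Hessian of the negative log-posterior gives a weight covariance, and a linearized quadratic form in it gives a per-point geometric uncertainty. All names and the Gauss-Newton form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy "SDF" that is linear in a fixed feature map: f(x; w) = w . phi(x).
def phi(x):
    # polynomial features [1, x, x^2] (illustrative choice)
    return np.stack([np.ones_like(x), x, x ** 2], axis=-1)

def laplace_fit(x_train, y_train, noise_std=0.1, prior_prec=1e-3):
    """MAP fit plus Laplace posterior covariance over the weights.

    The Hessian of the negative log-posterior is J^T J / sigma^2 + lambda I
    (a Gauss-Newton approximation for a general network; exact here).
    """
    J = phi(x_train)                                   # Jacobian of f w.r.t. w
    H = J.T @ J / noise_std ** 2 + prior_prec * np.eye(J.shape[1])
    w_map = np.linalg.solve(H, J.T @ y_train / noise_std ** 2)
    cov = np.linalg.inv(H)                             # Laplace covariance
    return w_map, cov

def predictive_variance(x_query, cov):
    # Linearized predictive variance g^T H^{-1} g, with g = df/dw = phi(x)
    g = phi(np.atleast_1d(np.asarray(x_query, dtype=float)))
    return np.einsum('nd,de,ne->n', g, cov, g)

x_train = np.linspace(-1.0, 1.0, 20)
y_train = x_train ** 2 - 0.5                           # a simple 1D "distance"
w_map, cov = laplace_fit(x_train, y_train)

var_in = predictive_variance(0.0, cov)[0]              # inside training range
var_out = predictive_variance(3.0, cov)[0]             # far from training data
```

As expected of a Laplace posterior, the predictive variance grows away from the training data, which is the behavior the paper correlates with surface reconstruction error.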
Related papers
- Machine learning assisted state prediction of misspecified linear dynamical system via modal reduction [0.0]
Parametric models with fixed nominal parameters often omit critical physical effects due to simplifications in geometry, material behavior, damping, or boundary conditions. This work introduces a comprehensive framework for MFE estimation and correction in high-dimensional finite element based structural dynamical systems. To ensure computational tractability, the FE system is projected onto a reduced modal basis, and a mesh-invariant neural network maps modal states to discrepancy estimates.
arXiv Detail & Related papers (2026-01-08T10:14:27Z) - NeuralSSD: A Neural Solver for Signed Distance Surface Reconstruction [34.55776349064238]
Implicit methods are preferred for their ability to accurately represent shapes and their robustness in handling topological changes. We propose a novel energy equation that balances the reliability of point cloud information. We also introduce a new convolutional network that learns three-dimensional information to achieve superior optimization results.
arXiv Detail & Related papers (2025-11-18T09:20:15Z) - Adaptive Dual Uncertainty Optimization: Boosting Monocular 3D Object Detection under Test-Time Shifts [80.32933059529135]
Test-Time Adaptation (TTA) methods have emerged to adapt to target distributions during inference. We propose Dual Uncertainty Optimization (DUO), the first TTA framework designed to jointly minimize both uncertainties for robust M3OD. In parallel, we design a semantic-aware normal field constraint that preserves geometric coherence in regions with clear semantic cues.
arXiv Detail & Related papers (2025-08-28T07:09:21Z) - Perfecting Depth: Uncertainty-Aware Enhancement of Metric Depth [33.61994004497114]
We propose a novel two-stage framework for sensor depth enhancement, called Perfecting Depth. This framework leverages the nature of diffusion models to automatically detect unreliable depth regions while preserving geometric cues. Our framework sets a new baseline for sensor depth enhancement, with potential applications in autonomous driving, robotics, and immersive technologies.
arXiv Detail & Related papers (2025-06-05T04:09:11Z) - Thin-Shell-SfT: Fine-Grained Monocular Non-rigid 3D Surface Tracking with Neural Deformation Fields [66.1612475655465]
3D reconstruction of deformable surfaces from RGB videos is a challenging problem. Existing methods use deformation models with statistical, neural, or physical priors. We propose Thin-Shell-SfT, a new method for non-rigid 3D tracking of meshes.
arXiv Detail & Related papers (2025-03-25T18:00:46Z) - 3D Equivariant Pose Regression via Direct Wigner-D Harmonics Prediction [50.07071392673984]
Existing methods learn 3D rotations parametrized in the spatial domain using angles or quaternions.
We propose a frequency-domain approach that directly predicts Wigner-D coefficients for 3D rotation regression.
Our method achieves state-of-the-art results on benchmarks such as ModelNet10-SO(3) and PASCAL3D+.
arXiv Detail & Related papers (2024-11-01T12:50:38Z) - ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics. We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal. Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
arXiv Detail & Related papers (2024-08-22T17:59:01Z) - Deep Modeling of Non-Gaussian Aleatoric Uncertainty [4.969887562291159]
Deep learning offers promising new ways to accurately model aleatoric uncertainty in robotic state estimation systems. In this study, we formulate and evaluate three fundamental deep learning approaches for conditional probability density modeling. Our results show that these deep learning methods can accurately capture complex uncertainty patterns, highlighting their potential for improving the reliability and robustness of estimation systems.
arXiv Detail & Related papers (2024-05-30T22:13:17Z) - PhyRecon: Physically Plausible Neural Scene Reconstruction [81.73129450090684]
We introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations.
Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points.
Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets.
arXiv Detail & Related papers (2024-04-25T15:06:58Z) - Bayesian NeRF: Quantifying Uncertainty with Volume Density for Neural Implicit Fields [1.199955563466263]
We present a Bayesian Neural Radiance Field (NeRF), which explicitly quantifies uncertainty in the volume density by modeling uncertainty in the occupancy. NeRF diverges from traditional geometric methods by providing an enriched scene representation, rendering color and density in 3D space from various viewpoints. We show that our method significantly enhances performance on RGB and depth images in a comprehensive dataset.
arXiv Detail & Related papers (2024-04-10T04:24:42Z) - FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with Pre-trained Vision-Language Models [59.13757801286343]
Few-shot class-incremental learning aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data. We introduce the FILP-3D framework with two novel components: the Redundant Feature Eliminator (RFE) for feature space misalignment and the Spatial Noise Compensator (SNC) for significant noise.
arXiv Detail & Related papers (2023-12-28T14:52:07Z) - Estimating 3D Uncertainty Field: Quantifying Uncertainty for Neural Radiance Fields [25.300284510832974]
We propose a novel approach to estimate a 3D Uncertainty Field based on the learned incomplete scene geometry.
By considering the accumulated transmittance along each camera ray, our Uncertainty Field infers 2D pixel-wise uncertainty.
Our experiments demonstrate that our approach is the only one that can explicitly reason about high uncertainty both on 3D unseen regions and its involved 2D rendered pixels.
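The transmittance idea in this summary can be sketched directly: per-sample uncertainties along a camera ray are combined into a single pixel value using the standard volume-rendering weights. The densities, uncertainties, and step size below are illustrative inputs, not the paper's learned quantities.

```python
import numpy as np

def pixel_uncertainty(sigma, u, delta):
    """sigma: densities at ray samples, u: per-sample uncertainty,
    delta: distance between consecutive samples."""
    alpha = 1.0 - np.exp(-sigma * delta)               # opacity per sample
    # transmittance T_i = prod_{j<i} (1 - alpha_j), i.e. exclusive cumprod
    T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    w = T * alpha                                      # rendering weights
    return float(np.sum(w * u)), w

sigma = np.array([0.1, 0.5, 2.0, 4.0])                 # example densities
u = np.array([0.2, 0.4, 0.9, 0.1])                     # example uncertainties
pix_u, w = pixel_uncertainty(sigma, u, delta=0.25)
```

Because the weights sum to at most one, the pixel-wise value is a conservative average of the 3D uncertainties the ray actually passes through.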
arXiv Detail & Related papers (2023-11-03T09:47:53Z) - GUPNet++: Geometry Uncertainty Propagation Network for Monocular 3D Object Detection [92.41859045360532]
We propose a novel Geometry Uncertainty Propagation Network (GUPNet++). It models the uncertainty propagation relationship of the geometry projection during training, improving the stability and efficiency of end-to-end model learning. Experiments show that the proposed approach not only obtains state-of-the-art (SOTA) performance in image-based monocular 3D detection but also demonstrates superior efficacy with a simplified framework.
arXiv Detail & Related papers (2023-10-24T08:45:15Z) - NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation [53.25973084799954]
We present NIKI (Neural Inverse Kinematics with Invertible Neural Network), which models bi-directional errors.
NIKI can learn from both the forward and inverse processes with invertible networks.
arXiv Detail & Related papers (2023-05-15T12:13:24Z) - Strategic Geosteeering Workflow with Uncertainty Quantification and Deep Learning: A Case Study on the Goliat Field [0.0]
This paper presents a practical workflow consisting of offline and online phases.
The offline phase includes training and building of an uncertain prior near-well geo-model.
The online phase uses the flexible iterative ensemble smoother (FlexIES) to perform real-time assimilation of extra-deep electromagnetic data.
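An ensemble-smoother update of the kind FlexIES iterates can be sketched in a few lines. FlexIES itself is iterative and operates on geo-model parameters with electromagnetic forward models; the minimal sketch below uses a scalar state observed directly (a standard EnKF-style analysis step with perturbed observations), only to show how assimilation shifts an ensemble toward the data while shrinking its spread. All quantities are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, d, obs_std):
    """X: (n_ens,) prior ensemble of a scalar state, observed directly.
    d: observed value, obs_std: observation noise std."""
    Y = X                                              # identity observation operator
    C_xy = np.cov(X, Y)[0, 1]                          # state-observation covariance
    C_yy = np.var(Y, ddof=1) + obs_std ** 2            # innovation covariance
    K = C_xy / C_yy                                    # Kalman gain
    d_pert = d + obs_std * rng.standard_normal(X.shape)  # perturbed observations
    return X + K * (d_pert - Y)

X_prior = rng.normal(5.0, 2.0, size=200)               # prior ensemble
X_post = enkf_update(X_prior, d=1.0, obs_std=0.5)      # assimilate one datum
```

The posterior ensemble mean moves toward the observation and the ensemble spread contracts, which is the basic mechanism behind real-time assimilation of extra-deep electromagnetic data.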
arXiv Detail & Related papers (2022-10-27T15:38:26Z) - φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z) - Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations [19.6329380710514]
Uncertainty quantification is a long-standing problem in Machine Learning.
We propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of standard NeRF that learns a probability distribution over all the possible radiance fields modeling the scene.
S-NeRF is able to provide more reliable predictions and confidence values than generic approaches previously proposed for uncertainty estimation in other domains.
arXiv Detail & Related papers (2021-09-05T16:56:43Z) - Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF [17.698319441265223]
We solve the problem of 6-DoF localisation and 3D dense reconstruction in spatial environments as approximate Bayesian inference in a deep state-space model.
This results in an expressive predictive model of the world, often missing in current state-of-the-art visual SLAM solutions.
We evaluate our approach on realistic unmanned aerial vehicle flight data, nearing the performance of state-of-the-art visual-inertial odometry systems.
arXiv Detail & Related papers (2020-06-17T22:06:35Z) - Semi-supervised deep learning for high-dimensional uncertainty quantification [6.910275451003041]
This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis.
An autoencoder is first adopted for mapping the high-dimensional space into a low-dimensional latent space.
A deep feedforward neural network is utilized to learn the mapping relationship and reconstruct the latent space.
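The autoencoder-based dimension reduction in this summary can be illustrated with its simplest instance: a linear autoencoder, which is equivalent to PCA. The paper uses deep networks; the sketch below only shows the encode/decode round trip from a high-dimensional space into a low-dimensional latent space, on synthetic data whose intrinsic dimension is known. All names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear_autoencoder(X, latent_dim):
    """Returns encode/decode functions built from the top principal axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:latent_dim]                                # (latent_dim, d) basis
    encode = lambda X: (X - mu) @ W.T                  # d -> latent_dim
    decode = lambda Z: Z @ W + mu                      # latent_dim -> d
    return encode, decode

# Synthetic 50-D data with intrinsic dimension 2
Z_true = rng.normal(size=(100, 2))
A = rng.normal(size=(2, 50))
X = Z_true @ A                                         # rank-2 data in 50-D
encode, decode = fit_linear_autoencoder(X, latent_dim=2)
X_rec = decode(encode(X))
err = float(np.max(np.abs(X - X_rec)))
```

When the latent dimension matches the data's intrinsic dimension, the reconstruction is exact up to floating-point error; reliability analysis can then be carried out in the cheap latent space.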
arXiv Detail & Related papers (2020-06-01T15:15:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.