Modeling uncertainty for Gaussian Splatting
- URL: http://arxiv.org/abs/2403.18476v1
- Date: Wed, 27 Mar 2024 11:45:08 GMT
- Title: Modeling uncertainty for Gaussian Splatting
- Authors: Luca Savant, Diego Valsesia, Enrico Magli
- Abstract summary: We present the first framework for uncertainty estimation using Gaussian Splatting (GS)
We introduce a Variational Inference-based approach that seamlessly integrates uncertainty prediction into the common rendering pipeline of GS.
We also introduce the Area Under Sparsification Error (AUSE) as a new term in the loss function, enabling optimization of uncertainty estimation alongside image reconstruction.
- Score: 21.836830270709
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Stochastic Gaussian Splatting (SGS): the first framework for uncertainty estimation using Gaussian Splatting (GS). GS recently advanced the novel-view synthesis field by achieving impressive reconstruction quality at a fraction of the computational cost of Neural Radiance Fields (NeRF). However, unlike the latter, it still lacks the ability to provide information about the confidence associated with its outputs. To address this limitation, we introduce a Variational Inference-based approach that seamlessly integrates uncertainty prediction into the common rendering pipeline of GS. Additionally, we introduce the Area Under Sparsification Error (AUSE) as a new term in the loss function, enabling optimization of uncertainty estimation alongside image reconstruction. Experimental results on the LLFF dataset demonstrate that our method outperforms existing approaches in terms of both image rendering quality and uncertainty estimation accuracy. Overall, our framework equips practitioners with valuable insights into the reliability of synthesized views, facilitating safer decision-making in real-world applications.
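In its standard form, AUSE is the area between two sparsification curves: one obtained by removing pixels in order of predicted uncertainty, and an oracle curve obtained by removing them in order of true error. A minimal sketch of that standard metric follows; it is not necessarily the exact implementation used in the paper:

```python
import numpy as np

def sparsification_curve(errors, order):
    """Mean error of the pixels remaining after removing, in the given
    order, the first k pixels, for k = 0 .. n-1."""
    sorted_err = errors[order]
    n = len(errors)
    # suffix_sums[k] = sum of errors still present after removing k pixels
    suffix_sums = np.cumsum(sorted_err[::-1])[::-1]
    return suffix_sums / np.arange(n, 0, -1)

def ause(errors, uncertainties):
    """Area Under the Sparsification Error curve: mean gap between the
    uncertainty-ordered curve and the oracle (error-ordered) curve."""
    curve_unc = sparsification_curve(errors, np.argsort(-uncertainties))
    curve_oracle = sparsification_curve(errors, np.argsort(-errors))
    return float(np.mean(curve_unc - curve_oracle))
```

A perfect uncertainty ranking reproduces the oracle curve and gives an AUSE of zero; uninformative or inverted rankings leave a positive area.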
Related papers
- With or Without Replacement? Improving Confidence in Fourier Imaging [5.542462410129539]
We show how a transition between sampling with and without replacement can lead to a weighted reconstruction scheme with improved performance for the standard LASSO.
In this paper, we illustrate how this reweighted sampling idea can also improve the debiased estimator.
arXiv Detail & Related papers (2024-07-18T15:15:19Z)
- Uncertainty-Aware Relational Graph Neural Network for Few-Shot Knowledge Graph Completion [12.887073684904147]
Few-shot knowledge graph completion (FKGC) aims to query the unseen facts of a relation given its few-shot reference entity pairs.
Existing FKGC works neglect such uncertainty, which makes them more susceptible to noise in the limited reference samples.
We propose a novel uncertainty-aware few-shot KG completion framework (UFKGC) to model uncertainty for a better understanding of the limited data.
arXiv Detail & Related papers (2024-03-07T14:23:25Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [63.32053223422317]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
In particular, we focus on characterizing the variance over values induced by a distribution over MDPs.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
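In this line of work, an uncertainty Bellman equation propagates a local uncertainty term through the Bellman recursion. A generic sketch, with illustrative symbols rather than the paper's exact equation:

```latex
u(s,a) \;=\; \nu(s,a) \;+\; \gamma^2 \sum_{s'} \mathbb{E}\!\left[P(s' \mid s,a)\right] \sum_{a'} \pi(a' \mid s')\, u(s',a')
```

Here $\nu(s,a)$ captures local epistemic uncertainty; the cited works differ in how $\nu$ is chosen so that the fixed point $u$ upper-bounds, or converges to, the posterior variance over values.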
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Uncertainty Estimation for Safety-critical Scene Segmentation via Fine-grained Reward Maximization [12.79542334840646]
Uncertainty estimation plays an important role for future reliable deployment of deep segmentation models in safety-critical scenarios.
We propose a novel fine-grained reward (FGRM) framework to address uncertainty estimation.
Our method outperforms state-of-the-art methods by a clear margin on all the calibration metrics of uncertainty estimation.
arXiv Detail & Related papers (2023-11-05T17:43:37Z)
- Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression [17.49026509916207]
Uncertainty quantification is critical for deploying deep neural networks (DNNs) in real-world applications.
For vision regression tasks, current AuxUE designs are mainly adopted for aleatoric uncertainty estimates.
We propose a generalized AuxUE scheme for more robust uncertainty quantification on regression tasks.
arXiv Detail & Related papers (2023-08-17T15:54:11Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- Benign Underfitting of Stochastic Gradient Descent [72.38051710389732]
We study to what extent stochastic gradient descent (SGD) can be understood as a "conventional" learning rule that achieves generalization by obtaining a good fit to the training data.
We analyze the closely related with-replacement SGD, for which an analogous phenomenon does not occur, and prove that its population risk does in fact converge at the optimal rate.
arXiv Detail & Related papers (2022-02-27T13:25:01Z)
- Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations [19.6329380710514]
Uncertainty quantification is a long-standing problem in Machine Learning.
We propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of standard NeRF that learns a probability distribution over all the possible radiance fields modeling the scene.
S-NeRF is able to provide more reliable predictions and confidence values than generic approaches previously proposed for uncertainty estimation in other domains.
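A distribution over radiance fields yields per-pixel confidence by Monte Carlo: sample several fields, render each, and take pixel-wise statistics. A minimal sketch assuming a hypothetical `sample_and_render` callable (the actual S-NeRF interface may differ):

```python
import numpy as np

def render_with_uncertainty(sample_and_render, pose, n_samples=32):
    """Monte Carlo estimate of the mean image and per-pixel variance.

    `sample_and_render(pose)` is a hypothetical callable that draws one
    radiance field from the learned distribution and renders it at `pose`.
    """
    renders = np.stack([sample_and_render(pose) for _ in range(n_samples)])
    # pixel-wise mean is the prediction; variance is the confidence map
    return renders.mean(axis=0), renders.var(axis=0)
```

High-variance pixels mark regions where the sampled fields disagree, which is where the rendered view should be trusted least.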
arXiv Detail & Related papers (2021-09-05T16:56:43Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty in a ReLU network is to be "a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.