Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D
Representations
- URL: http://arxiv.org/abs/2109.02123v1
- Date: Sun, 5 Sep 2021 16:56:43 GMT
- Title: Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations
- Authors: Jianxiong Shen, Adria Ruiz, Antonio Agudo, Francesc Moreno
- Abstract summary: Uncertainty quantification is a long-standing problem in Machine Learning.
We propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of standard NeRF that learns a probability distribution over all the possible radiance fields modeling the scene.
S-NeRF is able to provide more reliable predictions and confidence values than generic approaches previously proposed for uncertainty estimation in other domains.
- Score: 19.6329380710514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) has become a popular framework for learning
implicit 3D representations and addressing different tasks such as novel-view
synthesis or depth-map estimation. However, in downstream applications where
decisions need to be made based on automatic predictions, it is critical to
leverage the confidence associated with the model estimations. Whereas
uncertainty quantification is a long-standing problem in Machine Learning, it
has been largely overlooked in the recent NeRF literature. In this context, we
propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of
standard NeRF that learns a probability distribution over all the possible
radiance fields modeling the scene. This distribution allows us to quantify the
uncertainty associated with the scene information provided by the model. S-NeRF
optimization is posed as a Bayesian learning problem which is efficiently
addressed using the Variational Inference framework. Exhaustive experiments
over benchmark datasets demonstrate that S-NeRF is able to provide more
reliable predictions and confidence values than generic approaches previously
proposed for uncertainty estimation in other domains.
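The abstract poses S-NeRF optimization as Bayesian learning addressed with variational inference. A minimal sketch of the resulting prediction pipeline, assuming a toy diagonal-Gaussian variational posterior and a hypothetical stand-in `render` function (the paper's actual rendering and posterior are far richer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Variational posterior over radiance-field parameters: a diagonal Gaussian
# q(theta) = N(mu, diag(sigma^2)); mu and log_sigma are toy stand-ins here.
mu = rng.normal(size=8)
log_sigma = np.full(8, -1.0)

def render(theta, n_pixels=4):
    """Hypothetical stand-in for volume-rendering a radiance field with
    parameters theta; maps parameters to a small vector of pixel values."""
    xs = np.linspace(0.0, 1.0, n_pixels)
    return np.tanh(theta[:4].sum() * xs + theta[4:].sum())

# Monte Carlo predictive distribution: sample fields from q, render each,
# and aggregate the renders into a mean image and per-pixel uncertainty.
K = 64
samples = []
for _ in range(K):
    eps = rng.normal(size=mu.shape)         # reparameterisation trick
    theta = mu + np.exp(log_sigma) * eps    # theta ~ q(theta)
    samples.append(render(theta))
samples = np.stack(samples)                 # shape (K, n_pixels)

pred_mean = samples.mean(axis=0)            # rendered estimate
pred_var = samples.var(axis=0)              # per-pixel uncertainty
```

The key idea this illustrates is that, once a distribution over radiance fields is learned, confidence values come for free: the spread of the sampled renders at each pixel quantifies the model's uncertainty about that part of the scene.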
Related papers
- ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance for each point -- i.e., the locations where it is likely visible -- of NeRFs as a stochastic field.
We show that modeling per-point provenance during the NeRF optimization enriches the model with information, leading to improvements in novel view synthesis and uncertainty estimation.
arXiv Detail & Related papers (2024-01-16T06:19:18Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
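Credible intervals and a surprise index of the kind described above can be sketched from posterior predictive samples. This is one plausible formalisation, assuming toy Gaussian samples and a hypothetical standardised-distance definition of the surprise index (the paper's exact definition may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior predictive samples for one output (e.g. obtained by running the
# model under sampled network weights); toy Gaussian draws for illustration.
posterior_samples = rng.normal(loc=2.0, scale=0.5, size=2000)

# 95% credible interval from empirical quantiles of the posterior samples.
lo, hi = np.percentile(posterior_samples, [2.5, 97.5])

def surprise_index(y, samples):
    """Hypothetical surprise index: distance of an observation from the
    posterior predictive mean, in units of its standard deviation."""
    return abs(y - samples.mean()) / samples.std()

in_dist = surprise_index(2.1, posterior_samples)   # close to the bulk
out_dist = surprise_index(6.0, posterior_samples)  # far out-of-distribution
```

A large surprise index flags inputs where the model is being used outside the regime covered by its posterior, which is exactly the dangerous out-of-distribution usage the entry refers to.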
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- A General Framework for quantifying Aleatoric and Epistemic uncertainty in Graph Neural Networks [0.29494468099506893]
Graph Neural Networks (GNN) provide a powerful framework that elegantly integrates Graph theory with Machine learning.
We consider the problem of quantifying the uncertainty in predictions of GNN stemming from modeling errors and measurement uncertainty.
We propose a unified approach to treat both sources of uncertainty in a Bayesian framework.
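In a Bayesian treatment, the standard way to separate the two sources of uncertainty is the law of total variance: aleatoric uncertainty is the average data noise across posterior weight samples, epistemic uncertainty is the disagreement between those samples. A minimal sketch with toy values standing in for a sampled (graph) network's outputs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each posterior weight sample k yields a predictive Gaussian N(m_k, s_k^2);
# the toy values below stand in for the outputs of a sampled network.
K = 500
means = rng.normal(loc=1.0, scale=0.3, size=K)  # varies with weights -> epistemic
variances = np.full(K, 0.2 ** 2)                # data noise -> aleatoric

# Law of total variance: Var[y] = E[s^2] + Var[m].
aleatoric = variances.mean()   # mean of the per-sample noise variances
epistemic = means.var()        # variance of the per-sample means
total = aleatoric + epistemic
```

The decomposition matters in practice: epistemic uncertainty shrinks with more training data, while aleatoric uncertainty (measurement noise) does not.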
arXiv Detail & Related papers (2022-05-20T05:25:40Z)
- Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification [44.598503284186336]
Conditional-Flow NeRF (CF-NeRF) is a novel probabilistic framework to incorporate uncertainty quantification into NeRF-based approaches.
CF-NeRF learns a distribution over all possible radiance fields modelling the scene, which is used to quantify the uncertainty associated with the modelled scene.
arXiv Detail & Related papers (2022-03-18T23:26:20Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
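The Nadaraya-Watson estimator mentioned above forms p(y | x) as a kernel-weighted average of training labels near x. A minimal sketch on toy 2-D embeddings, where the Gaussian kernel and bandwidth are illustrative choices rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy training set: 2-D embeddings with binary class labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)   # two classes split on the first axis
onehot = np.eye(2)[y]           # (200, 2) one-hot label matrix

def nw_label_distribution(x, X, onehot, h=0.5):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average of
    the one-hot labels of nearby training points (Gaussian kernel, bandwidth h)."""
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * h ** 2))
    return (w[:, None] * onehot).sum(axis=0) / w.sum()

p = nw_label_distribution(np.array([2.0, 0.0]), X, onehot)
entropy = -(p * np.log(np.clip(p, 1e-12, None))).sum()  # predictive uncertainty
```

Because the estimate depends only on a fixed embedding and the training labels, it attaches uncertainty to an already-trained deterministic network without retraining it.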
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- A Kernel Framework to Quantify a Model's Local Predictive Uncertainty under Data Distributional Shifts [21.591460685054546]
Internal layer outputs of a trained neural network contain all of the information related to both its mapping function and its input data distribution.
We propose a framework for predictive uncertainty quantification of a trained neural network that explicitly estimates the PDF of its raw prediction space.
The kernel framework is observed to provide much more precise model uncertainty estimates, as measured by its ability to detect model prediction errors.
arXiv Detail & Related papers (2021-03-02T00:31:53Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
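One simple mechanism for "raising the entropy of those predictions towards that of the prior" is to mix an overconfident predictive distribution with the label prior. The sketch below is an illustrative mechanism under that assumption, not the paper's exact procedure:

```python
import numpy as np

# Class prior, e.g. estimated from label frequencies (toy values).
prior = np.array([0.5, 0.3, 0.2])

def temper_towards_prior(p, alpha):
    """Raise the entropy of an overconfident prediction p by mixing it with
    the label prior; alpha in [0, 1] controls how far p moves toward it.
    Illustrative only -- the paper chooses where and how much to temper."""
    return (1.0 - alpha) * p + alpha * prior

def entropy(p):
    return -(p * np.log(np.clip(p, 1e-12, None))).sum()

p = np.array([0.98, 0.01, 0.01])   # overconfident prediction
q = temper_towards_prior(p, alpha=0.5)
# q remains a valid distribution with higher entropy than p.
```

The mixture keeps the predicted class unchanged while making the stated confidence better reflect what the model can actually justify in that region of feature space.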
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.