Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in
Neural Radiance Fields
- URL: http://arxiv.org/abs/2209.08718v1
- Date: Mon, 19 Sep 2022 02:28:33 GMT
- Title: Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in
Neural Radiance Fields
- Authors: Niko Sünderhauf, Jad Abou-Chakra, Dimity Miller
- Abstract summary: We show that ensembling effectively quantifies model uncertainty in Neural Radiance Fields (NeRFs).
We demonstrate that NeRF uncertainty can be utilised for next-best view selection and model refinement.
- Score: 7.380217868660371
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that ensembling effectively quantifies model uncertainty in Neural
Radiance Fields (NeRFs) if a density-aware epistemic uncertainty term is
considered. The naive ensembles investigated in prior work simply average
rendered RGB images to quantify the model uncertainty caused by conflicting
explanations of the observed scene. In contrast, we additionally consider the
termination probabilities along individual rays to identify epistemic model
uncertainty due to a lack of knowledge about the parts of a scene unobserved
during training. We achieve new state-of-the-art performance across established
uncertainty quantification benchmarks for NeRFs, outperforming methods that
require complex changes to the NeRF architecture and training regime. We
furthermore demonstrate that NeRF uncertainty can be utilised for next-best
view selection and model refinement.
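To make the two-term idea concrete, below is a minimal NumPy sketch of how a density-aware ensemble uncertainty could be computed per ray: the naive term is the variance of the rendered RGB across ensemble members, and the density-aware term penalises rays whose rendering weights do not sum to one, i.e. rays that pass through space unobserved during training. The function names, the unweighted sum of the two terms, and the toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def render_weights(sigmas, deltas):
    """Volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i = prod_{j<i} (1 - alpha_j) is the transmittance at sample i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    return trans * alphas

def ray_uncertainty(member_rgbs, member_weights):
    """Per-ray predictive uncertainty for an ensemble of M NeRFs.

    member_rgbs:    (M, 3) RGB rendered for this ray by each member.
    member_weights: (M, N) rendering weights along the ray for each member.
    """
    rgb_variance = member_rgbs.var(axis=0).mean()  # disagreement between members
    termination = member_weights.sum(axis=1)       # prob. the ray terminates in the scene
    epistemic = (1.0 - termination).mean()         # high where the scene is unobserved
    return rgb_variance + epistemic

def next_best_view(per_view_ray_uncertainties):
    """Pick the candidate view whose rays are most uncertain on average."""
    return int(np.argmax([u.mean() for u in per_view_ray_uncertainties]))

# Toy example: M = 3 ensemble members, N = 64 samples along one ray.
rng = np.random.default_rng(0)
deltas = np.full(64, 0.05)
sigmas = rng.uniform(0.0, 2.0, size=(3, 64))
weights = np.stack([render_weights(s, deltas) for s in sigmas])
rgbs = rng.uniform(0.0, 1.0, size=(3, 3))
print(ray_uncertainty(rgbs, weights))
```

Under the same assumptions, next-best view selection reduces to rendering each candidate view with the ensemble and choosing the view with the highest mean per-ray uncertainty.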
Related papers
- LoGU: Long-form Generation with Uncertainty Expressions [49.76417603761989]
We introduce the task of Long-form Generation with Uncertainty (LoGU).
We identify two key challenges: Uncertainty Suppression and Uncertainty Misalignment.
Our framework adopts a divide-and-conquer strategy, refining uncertainty based on atomic claims.
Experiments on three long-form instruction following datasets show that our method significantly improves accuracy, reduces hallucinations, and maintains the comprehensiveness of responses.
arXiv Detail & Related papers (2024-10-18T09:15:35Z)
- OPONeRF: One-Point-One NeRF for Robust Neural Rendering [70.56874833759241]
We propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering.
Small but unpredictable perturbations such as object movements, lighting changes, and data contamination are common in real-life 3D scenes.
Experimental results show that our OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics.
arXiv Detail & Related papers (2024-09-30T07:49:30Z)
- Taming Uncertainty in Sparse-view Generalizable NeRF via Indirect Diffusion Guidance [13.006310342461354]
With sparse inputs, generalizable NeRFs (Gen-NeRF) often produce blurry artifacts in unobserved regions, which are highly uncertain.
We propose an Indirect Diffusion-guided NeRF framework, termed ID-NeRF, to address this uncertainty from a generative perspective.
arXiv Detail & Related papers (2024-02-02T08:39:51Z)
- Instant Uncertainty Calibration of NeRFs Using a Meta-calibrator [60.47106421809998]
We introduce the concept of a meta-calibrator that performs uncertainty calibration for NeRFs with a single forward pass.
We show that the meta-calibrator generalizes to unseen scenes and achieves well-calibrated, state-of-the-art uncertainty estimates for NeRFs.
arXiv Detail & Related papers (2023-12-04T21:29:31Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By introducing additional training regularization terms, our surprisingly simple model provides uncertainty estimates with state-of-the-art reliability, without requiring extra modules or multiple inference passes.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- A General Framework for quantifying Aleatoric and Epistemic uncertainty in Graph Neural Networks [0.29494468099506893]
Graph Neural Networks (GNNs) provide a powerful framework that elegantly integrates graph theory with machine learning.
We consider the problem of quantifying the uncertainty in predictions of GNN stemming from modeling errors and measurement uncertainty.
We propose a unified approach to treat both sources of uncertainty in a Bayesian framework.
arXiv Detail & Related papers (2022-05-20T05:25:40Z)
- Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification [44.598503284186336]
Conditional-Flow NeRF (CF-NeRF) is a novel probabilistic framework to incorporate uncertainty quantification into NeRF-based approaches.
CF-NeRF learns a distribution over all possible radiance fields that model the scene, which is used to quantify the uncertainty associated with the reconstruction.
arXiv Detail & Related papers (2022-03-18T23:26:20Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations [19.6329380710514]
Uncertainty quantification is a long-standing problem in machine learning.
We propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of standard NeRF that learns a probability distribution over all the possible radiance fields modeling the scene.
S-NeRF is able to provide more reliable predictions and confidence values than generic approaches previously proposed for uncertainty estimation in other domains.
arXiv Detail & Related papers (2021-09-05T16:56:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.