Normalizing Flow Ensembles for Rich Aleatoric and Epistemic Uncertainty Modeling
- URL: http://arxiv.org/abs/2302.01312v3
- Date: Tue, 3 Oct 2023 19:32:35 GMT
- Title: Normalizing Flow Ensembles for Rich Aleatoric and Epistemic Uncertainty Modeling
- Authors: Lucas Berry and David Meger
- Abstract summary: We propose an ensemble of Normalizing Flows (NF) which are state-of-the-art in modeling aleatoric uncertainty.
The ensembles are created via sets of fixed dropout masks, making them less expensive than creating separate NF models.
We demonstrate how to leverage the unique structure of NFs, base distributions, to estimate aleatoric uncertainty without relying on samples.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we demonstrate how to reliably estimate epistemic uncertainty
while maintaining the flexibility needed to capture complicated aleatoric
distributions. To this end, we propose an ensemble of Normalizing Flows (NF),
which are state-of-the-art in modeling aleatoric uncertainty. The ensembles are
created via sets of fixed dropout masks, making them less expensive than
creating separate NF models. We demonstrate how to leverage the unique
structure of NFs, base distributions, to estimate aleatoric uncertainty without
relying on samples, provide a comprehensive set of baselines, and derive
unbiased estimates for differential entropy. The methods were applied to a
variety of experiments, commonly used to benchmark aleatoric and epistemic
uncertainty estimation: 1D sinusoidal data, 2D windy grid-world ($\it{Wet
Chicken}$), $\it{Pendulum}$, and $\it{Hopper}$. In these experiments, we set up
an active learning framework and evaluate each model's capability at measuring
aleatoric and epistemic uncertainty. The results show the advantages of using
NF ensembles in capturing complicated aleatoric distributions while maintaining accurate
epistemic uncertainty estimates.
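The abstract's two key mechanisms, closed-form aleatoric uncertainty from the base distribution and an ensemble built from fixed dropout masks, can be illustrated with a minimal 1D sketch. All names, the affine flow, and the mask readout below are hypothetical stand-ins, not the paper's architecture:

```python
import math

# A normalizing flow is an invertible map y = f(z) of a base Gaussian
# z ~ N(0, 1).  For an affine flow y = a*z + b, the differential entropy
# is available in closed form, so aleatoric uncertainty needs no samples:
#   H(Y) = H(Z) + log|a| = 0.5*log(2*pi*e) + log|a|

def affine_flow_entropy(a: float) -> float:
    """Closed-form differential entropy of y = a*z + b, z ~ N(0, 1)."""
    return 0.5 * math.log(2 * math.pi * math.e) + math.log(abs(a))

# An ensemble via fixed dropout masks: each member zeroes a fixed subset
# of a shared parameter vector instead of training a separate model.
shared_params = [1.5, 0.7, 2.0, 0.3]
masks = [(1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 1)]

def member_scale(mask) -> float:
    # Hypothetical readout: the member's flow scale 'a' is the sum of
    # its unmasked parameters (a stand-in for a real forward pass).
    return sum(p * m for p, m in zip(shared_params, mask))

scales = [member_scale(m) for m in masks]
# Aleatoric: average closed-form entropy across members (no sampling).
aleatoric = sum(affine_flow_entropy(a) for a in scales) / len(scales)
# Epistemic: disagreement (variance) of the members' predictions.
mean_a = sum(scales) / len(scales)
epistemic = sum((a - mean_a) ** 2 for a in scales) / len(scales)
```

The members share all parameters and differ only in their fixed masks, which is what makes the ensemble cheaper than training separate NF models.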
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Uncertainty Estimation and Out-of-Distribution Detection for LiDAR Scene Semantic Segmentation [0.6144680854063939]
Safe navigation in new environments requires autonomous vehicles and robots to accurately interpret their surroundings.
We propose a method to distinguish in-distribution (ID) from out-of-distribution (OOD) samples.
We quantify both epistemic and aleatoric uncertainties using the feature space of a single deterministic model.
arXiv Detail & Related papers (2024-10-11T10:19:24Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Deep Evidential Learning for Bayesian Quantile Regression [3.6294895527930504]
It is desirable to have accurate uncertainty estimation from a single deterministic forward-pass model.
This paper proposes a deep Bayesian quantile regression model that can estimate the quantiles of a continuous target distribution without the Gaussian assumption.
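Quantile regression without a Gaussian assumption rests on the pinball (quantile) loss, whose minimizer is the target quantile. A small stdlib sketch (the data and grid search are illustrative, not the paper's evidential model):

```python
def pinball_loss(y: float, pred: float, tau: float) -> float:
    """Asymmetric quantile loss: its minimizer is the tau-quantile."""
    u = y - pred
    return tau * u if u >= 0 else (tau - 1) * u

data = [1.0, 2.0, 3.0, 4.0, 5.0]

def total_loss(pred: float, tau: float) -> float:
    return sum(pinball_loss(y, pred, tau) for y in data)

# Grid search over candidate predictions; for tau = 0.5 the empirical
# minimizer is the sample median.
best = min((total_loss(p / 10, 0.5), p / 10) for p in range(0, 61))
```

Minimizing this loss at several values of tau recovers the quantiles of the target distribution directly, which is what lets such models sidestep the Gaussian assumption.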
arXiv Detail & Related papers (2023-08-21T11:42:16Z) - User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems [49.75149094527068]
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We are able to sample conditionally on nonlinear user-defined events at inference time, and the samples match data statistics even when drawn from the tails of the distribution.
arXiv Detail & Related papers (2023-06-13T03:42:03Z) - Uncertainty Estimates of Predictions via a General Bias-Variance Decomposition [7.811916700683125]
We introduce a bias-variance decomposition for proper scores, giving rise to the Bregman Information as the variance term.
We showcase the practical relevance of this decomposition on several downstream tasks, including model ensembles and confidence regions.
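A hedged sketch of the kind of identity involved, in standard Bregman-divergence notation (the paper's exact decomposition may differ in its conditioning and terms): for a convex generator $\phi$ with divergence $d_\phi(x, s) = \phi(x) - \phi(s) - \langle \nabla\phi(s),\, x - s \rangle$ and $\mu = \mathbb{E}[X]$,

```latex
\mathbb{E}\big[d_\phi(X, s)\big]
  = \underbrace{\mathbb{E}\big[d_\phi(X, \mu)\big]}_{\text{Bregman Information } \mathbb{I}_\phi(X)}
  + \; d_\phi(\mu, s),
\qquad
\mathbb{I}_\phi(X) = \mathbb{E}[\phi(X)] - \phi\big(\mathbb{E}[X]\big).
```

With $\phi(x) = x^2$ this reduces to the classical bias-variance decomposition of the squared error, with the Bregman Information playing the role of the variance term.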
arXiv Detail & Related papers (2022-10-21T21:24:37Z) - Learning Multivariate CDFs and Copulas using Tensor Factorization [39.24470798045442]
Learning the multivariate distribution of data is a core challenge in statistics and machine learning.
In this work, we aim to learn multivariate cumulative distribution functions (CDFs), as they can handle mixed random variables.
We show that any grid sampled version of a joint CDF of mixed random variables admits a universal representation as a naive Bayes model.
We demonstrate the superior performance of the proposed model in several synthetic and real datasets and applications including regression, sampling and data imputation.
arXiv Detail & Related papers (2022-10-13T16:18:46Z) - A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement [7.6146285961466]
In this paper, we generalize methods to produce disentangled uncertainties to work with different uncertainty quantification methods.
We show that there is an interaction between learning aleatoric and epistemic uncertainty, which is unexpected and violates assumptions on aleatoric uncertainty.
We expect that our formulation and results help practitioners and researchers choose uncertainty methods and expand the use of disentangled uncertainties.
arXiv Detail & Related papers (2022-04-20T08:41:37Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
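In one dimension the diffeomorphism property makes the idea transparent: the probability of a region under the flow's output distribution follows from the base CDF and the inverse map, with no Monte Carlo samples. The toy affine flow below is a hypothetical stand-in for a trained architecture:

```python
import math

# For an invertible flow y = f(z) with base z ~ N(0, 1):
#   P(a <= Y <= b) = Phi(f_inv(b)) - Phi(f_inv(a))

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def flow(z: float) -> float:
    # Toy invertible flow: y = 2*z + 1.
    return 2.0 * z + 1.0

def flow_inv(y: float) -> float:
    return (y - 1.0) / 2.0

def region_prob(a: float, b: float) -> float:
    """Exact probability of [a, b] under the flow's output distribution."""
    return phi(flow_inv(b)) - phi(flow_inv(a))

p = region_prob(-1.0, 3.0)  # maps to z in [-1, 1] under the base Gaussian
```

The cited paper's contribution is making this kind of region-probability computation efficient for multivariate flows, where the region's preimage is no longer a simple interval.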
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Dense Uncertainty Estimation via an Ensemble-based Conditional Latent
Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z) - The Hidden Uncertainty in a Neural Networks Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.