$\Delta$-UQ: Accurate Uncertainty Quantification via Anchor Marginalization
- URL: http://arxiv.org/abs/2110.02197v1
- Date: Tue, 5 Oct 2021 17:44:31 GMT
- Title: $\Delta$-UQ: Accurate Uncertainty Quantification via Anchor Marginalization
- Authors: Rushil Anirudh and Jayaraman J. Thiagarajan
- Abstract summary: We present $\Delta$-UQ -- a novel, general-purpose uncertainty estimator using the concept of anchoring in predictive models.
We find this uncertainty is deeply connected to improper sampling of the input data and to inherent noise, enabling us to estimate the total uncertainty in any system.
- Score: 40.581619201120716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present $\Delta$-UQ -- a novel, general-purpose uncertainty estimator
using the concept of anchoring in predictive models. Anchoring works by first
transforming the input into a tuple consisting of an anchor point drawn from a
prior distribution, and a combination of the input sample with the anchor using
a pretext encoding scheme. This encoding is such that the original input can be
perfectly recovered from the tuple -- regardless of the choice of the anchor.
Therefore, any predictive model should be able to predict the target response
from the tuple alone (since it implicitly represents the input). Moreover, by
varying the anchors for a fixed sample, we can estimate uncertainty in the
prediction even using only a single predictive model. We find this uncertainty
is deeply connected to improper sampling of the input data and to inherent noise,
enabling us to estimate the total uncertainty in any system. With extensive
empirical studies on a variety of use-cases, we demonstrate that $\Delta$-UQ
outperforms several competitive baselines. Specifically, we study model
fitting, sequential model optimization, and model-based inversion in the
regression setting, as well as out-of-distribution detection and calibration
under distribution shifts for classification.
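The anchoring recipe above is easy to make concrete. The following PyTorch sketch (an illustration under assumptions, not the authors' released implementation) uses the residual tuple $[c, x - c]$ as the pretext encoding, which satisfies the recoverability requirement since $x = c + (x - c)$ for any anchor $c$; the network shape, anchor pool, and anchor count are all placeholder choices.

```python
import torch
import torch.nn as nn

class AnchoredRegressor(nn.Module):
    """Predicts the target from the tuple (anchor, input - anchor)."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # The tuple (c, x - c) doubles the input dimension.
        self.net = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, anchors):
        # x is always recoverable from the tuple as c + (x - c),
        # so the model can predict the response from the tuple alone.
        return self.net(torch.cat([anchors, x - anchors], dim=-1))

def predict_with_uncertainty(model, x, anchor_pool, n_anchors=20):
    """Marginalize over anchors for a fixed sample: the mean is the
    prediction and the spread across anchors is the uncertainty,
    all from a single trained model."""
    preds = []
    with torch.no_grad():
        for _ in range(n_anchors):
            # Draw a random anchor for each sample in the batch.
            idx = torch.randint(len(anchor_pool), (x.shape[0],))
            preds.append(model(x, anchor_pool[idx]))
    preds = torch.stack(preds)           # (n_anchors, batch, 1)
    return preds.mean(dim=0), preds.std(dim=0)
```

In practice the anchor pool would be drawn from the prior the abstract refers to, e.g. the training inputs themselves, so that anchors seen at test time match those seen during training.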
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm that allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
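For context on the rejection paradigm, the snippet below shows the classical confidence-threshold abstention rule; it is a baseline illustration, not the density-ratio framework this paper proposes, and the threshold is an arbitrary assumed value.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.8):
    """Abstain (label -1) whenever the top class probability falls
    below a confidence threshold. Classical baseline rule, not the
    paper's density-ratio objective; threshold is illustrative."""
    labels = probs.argmax(axis=-1)
    confident = probs.max(axis=-1) >= threshold
    return np.where(confident, labels, -1)
```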
- Conformalization of Sparse Generalized Linear Models [2.1485350418225244]
Conformal prediction estimates a confidence set for $y_{n+1}$ that is valid for any finite sample size.
Although attractive, computing such a set is computationally infeasible in most regression problems.
We show how our path-following algorithm accurately approximates conformal prediction sets.
arXiv Detail & Related papers (2023-07-11T08:36:12Z)
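To make the notion of a finite-sample-valid set concrete, here is a sketch of the simpler split-conformal variant for regression; the paper itself tackles the computationally harder full-conformal setting for sparse GLMs via a path-following approximation, so this is background rather than the paper's algorithm. The `model` argument assumes a scikit-learn-style `.predict()`.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Split-conformal interval for y_{n+1} with finite-sample
    coverage of at least 1 - alpha, given a held-out calibration set."""
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected empirical quantile of the scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    y_hat = model.predict(x_new.reshape(1, -1))[0]
    return y_hat - q, y_hat + q
```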
- A probabilistic, data-driven closure model for RANS simulations with aleatoric, model uncertainty [1.8416014644193066]
We propose a data-driven closure model for Reynolds-averaged Navier-Stokes (RANS) simulations that incorporates aleatoric, model uncertainty.
A fully Bayesian formulation is proposed, combined with a sparsity-inducing prior in order to identify regions in the problem domain where the parametric closure is insufficient.
arXiv Detail & Related papers (2023-07-05T16:53:31Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- Estimating Regression Predictive Distributions with Sample Networks [17.935136717050543]
A common approach to modeling uncertainty is to choose a parametric distribution and fit it to the data using maximum likelihood estimation.
The chosen parametric form can be a poor fit to the data-generating distribution, resulting in unreliable uncertainty estimates.
We propose SampleNet, a flexible and scalable architecture for modeling uncertainty that avoids specifying a parametric form on the output distribution.
arXiv Detail & Related papers (2022-11-24T17:23:29Z)
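The parametric baseline criticized above can be stated in a few lines: a heteroscedastic Gaussian head trained by maximum likelihood. This is a generic sketch (additive constants dropped), included to show the fixed output form that SampleNet's sample-based predictions avoid; it is not code from the paper.

```python
import torch

def gaussian_nll(mu, log_var, y):
    """Negative log-likelihood of a heteroscedastic Gaussian
    (constants dropped): the 'choose a parametric form and fit by
    maximum likelihood' baseline. If the data-generating distribution
    is skewed or multimodal, the Gaussian assumption yields unreliable
    uncertainty estimates."""
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
```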
- Anomaly Attribution with Likelihood Compensation [14.99385222547436]
This paper addresses the task of explaining anomalous predictions of a black-box regression model.
Given a model's deviation from the expected value, the goal is to infer a responsibility score for each input variable.
To the best of our knowledge, this is the first principled framework that computes a responsibility score for real-valued anomalous model deviations.
arXiv Detail & Related papers (2022-08-23T02:00:20Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Probabilistic Anchor Assignment with IoU Prediction for Object Detection [9.703212439661097]
In object detection, determining which anchors to assign as positive or negative samples, known as anchor assignment, has been revealed as a core procedure that can significantly affect a model's performance.
We propose a novel anchor assignment strategy that adaptively separates anchors into positive and negative samples for a ground truth bounding box according to the model's learning status.
arXiv Detail & Related papers (2020-07-16T04:26:57Z)
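For reference, the fixed-threshold IoU rule that adaptive strategies of this kind replace looks roughly as follows; the thresholds are illustrative assumptions, and this is the conventional baseline rather than the paper's probabilistic assignment.

```python
import numpy as np

def iou_anchor_assignment(max_ious, pos_thresh=0.5, neg_thresh=0.4):
    """Fixed-threshold IoU assignment, the conventional baseline that
    adaptive schemes such as probabilistic anchor assignment replace.
    max_ious[i] is anchor i's best IoU over the ground-truth boxes.
    Returns 1 (positive), 0 (negative), or -1 (ignored)."""
    labels = np.full(max_ious.shape, -1, dtype=int)  # ignore by default
    labels[max_ious >= pos_thresh] = 1
    labels[max_ious < neg_thresh] = 0
    return labels
```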
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
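The randomized-truncation device can be illustrated generically: draw a geometric truncation level and reweight each term by its survival probability, which keeps the estimate of the infinite series unbiased. This toy sketch is not SUMO itself, which applies the device to a particular series converging to $\log p(x)$.

```python
import numpy as np

def russian_roulette_sum(delta_term, p_continue=0.6, rng=None):
    """Unbiased estimate of an infinite series sum_k delta_k via
    randomized truncation: term k is evaluated with probability
    p_continue**k = P(K >= k) and reweighted by its inverse, so the
    expectation equals the full infinite sum."""
    rng = rng or np.random.default_rng()
    total, k = 0.0, 0
    while True:
        total += delta_term(k) / p_continue**k
        if rng.random() > p_continue:  # stop with prob. 1 - p_continue
            return total
        k += 1
```

For example, with `delta_term = lambda k: 0.5**k` the estimator has expectation 2, the true value of the sum, even though each call evaluates only finitely many terms.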
This list is automatically generated from the titles and abstracts of the papers on this site.