Improving Deterministic Uncertainty Estimation in Deep Learning for
Classification and Regression
- URL: http://arxiv.org/abs/2102.11409v1
- Date: Mon, 22 Feb 2021 23:29:12 GMT
- Title: Improving Deterministic Uncertainty Estimation in Deep Learning for
Classification and Regression
- Authors: Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal
- Abstract summary: We propose a new model that estimates uncertainty in a single forward pass.
Our approach combines a bi-Lipschitz feature extractor with an inducing point approximate Gaussian process, offering robust and principled uncertainty estimation.
- Score: 30.112634874443494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new model that estimates uncertainty in a single forward pass
and works on both classification and regression problems. Our approach combines
a bi-Lipschitz feature extractor with an inducing point approximate Gaussian
process, offering robust and principled uncertainty estimation. This can be
seen as a refinement of Deep Kernel Learning (DKL), with our changes allowing
DKL to match the accuracy of softmax neural networks. Our method overcomes the
limitations of previous work addressing deterministic uncertainty
quantification, such as the dependence of uncertainty on ad hoc
hyper-parameters. Our method matches SotA accuracy, 96.2% on CIFAR-10, while
maintaining the speed of softmax models, and provides uncertainty estimates
that outperform previous single forward pass uncertainty models. Finally, we
demonstrate our method on a recently introduced benchmark for uncertainty in
regression: treatment deferral in causal models for personalized medicine.
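A minimal sketch of the second component the abstract describes, the inducing-point approximate GP: predictive variance shrinks near the inducing points and grows away from them, which is what makes the uncertainty distance-aware. This is a schematic illustration only, not the authors' implementation; the bi-Lipschitz feature extractor is omitted (raw inputs stand in for extracted features), and all names and values are illustrative.

```python
import math

def rbf(a, b, lengthscale=1.0):
    # RBF kernel between two feature vectors
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2 * lengthscale ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_predictive_variance(x, inducing, noise=1e-2):
    # var(x) = k(x,x) - k_xZ (K_ZZ + noise*I)^{-1} k_Zx
    K = [[rbf(zi, zj) + (noise if i == j else 0.0)
          for j, zj in enumerate(inducing)] for i, zi in enumerate(inducing)]
    kx = [rbf(x, z) for z in inducing]
    alpha = solve(K, kx)
    return max(rbf(x, x) - sum(a * k for a, k in zip(alpha, kx)), 0.0)

Z = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # inducing points in feature space
near = gp_predictive_variance([0.1, 0.1], Z)
far = gp_predictive_variance([5.0, 5.0], Z)
```

Here `near` is small (the query sits among the inducing points) while `far` approaches the prior variance of 1, so out-of-distribution inputs are flagged with high uncertainty in a single forward pass.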
Related papers
- Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools for accelerating the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining a post hoc calibration method with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
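One standard post hoc calibration method is temperature scaling, where a single scalar T is fit on validation data to soften overconfident logits; the sketch below is illustrative (the paper evaluates its own set of methods, and all logits and labels here are toy values).

```python
import math

def softmax(logits, T=1.0):
    # temperature-scaled softmax; T > 1 softens overconfident predictions
    m = max(logits)
    exps = [math.exp((l - m) / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def nll(logits_list, labels, T):
    # average negative log-likelihood at temperature T
    total = 0.0
    for logits, y in zip(logits_list, labels):
        total -= math.log(softmax(logits, T)[y])
    return total / len(labels)

def fit_temperature(logits_list, labels):
    # pick T minimizing validation NLL (simple grid search)
    grid = [0.5 + 0.1 * i for i in range(41)]  # 0.5 .. 4.5
    return min(grid, key=lambda T: nll(logits_list, labels, T))

# toy validation set: confident logits, but the last prediction is wrong
logits = [[6.0, 0.0, 0.0], [0.0, 6.0, 0.0], [0.0, 0.0, 6.0], [0.0, 6.0, 0.0]]
labels = [0, 1, 2, 0]
T = fit_temperature(logits, labels)
```

Because the model is overconfident on the misclassified example, the fitted temperature comes out above 1, reducing validation NLL without changing the argmax predictions.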
arXiv Detail & Related papers (2024-07-19T10:29:00Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
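The fixed-point character of an uncertainty Bellman equation can be shown on a toy tabular chain: local epistemic variance at each state plus discounted propagated uncertainty from successors. This is a schematic of the general idea, not the paper's exact operator; the transition model and variances are made up.

```python
# Toy uncertainty Bellman iteration on a 3-state deterministic chain.
gamma = 0.9
P = {0: [(1, 1.0)], 1: [(2, 1.0)], 2: [(2, 1.0)]}   # s -> [(s', prob)]
local = {0: 0.0, 1: 0.5, 2: 2.0}                     # local variance per state

# iterate u(s) = local(s) + gamma^2 * E[u(s')] to its fixed point
u = {s: 0.0 for s in P}
for _ in range(200):
    u = {s: local[s] + gamma ** 2 * sum(p * u[sp] for sp, p in P[s])
         for s in P}
```

For the absorbing state 2 the fixed point has the closed form `local[2] / (1 - gamma**2)`, and uncertainty decays geometrically as one moves further from its source.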
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Uncertainty Estimation for Safety-critical Scene Segmentation via Fine-grained Reward Maximization [12.79542334840646]
Uncertainty estimation plays an important role in the reliable deployment of deep segmentation models in safety-critical scenarios.
We propose a novel fine-grained reward (FGRM) framework to address uncertainty estimation.
Our method outperforms state-of-the-art methods by a clear margin on all the calibration metrics of uncertainty estimation.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
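For context, the classical delta method that this work builds on approximates the variance of a function of an estimator via its derivative: Var[f(θ̂)] ≈ f′(θ)² Var[θ̂]. The sketch below shows only that classical recipe with a finite-difference derivative, not the paper's implicit, loss-regularized variant; the choice of f and the data are illustrative.

```python
import random

random.seed(0)

def delta_method_var(f, theta_hat, var_theta, h=1e-5):
    # Var[f(theta_hat)] ~= f'(theta)^2 * Var[theta_hat],
    # with the derivative taken by central finite difference
    fprime = (f(theta_hat + h) - f(theta_hat - h)) / (2 * h)
    return fprime ** 2 * var_theta

# estimator: sample mean of exponential data; evaluation: f(mean) = mean^2
data = [random.expovariate(1.0) for _ in range(10000)]
n = len(data)
mean = sum(data) / n
var_mean = sum((x - mean) ** 2 for x in data) / (n - 1) / n  # Var of the mean
approx = delta_method_var(lambda t: t * t, mean, var_mean)
```

Since f′ is exactly 2·mean here, the finite-difference result agrees with the analytic delta-method variance to floating-point precision.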
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend the approximate inference for the loss-calibrated Bayesian framework to dropweights-based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
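The core of loss-calibrated decision-making is that with asymmetric misclassification costs, the optimal action follows the expected cost under calibrated probabilities rather than the argmax. A minimal illustration (the cost values and the diagnosis framing are hypothetical, not taken from the paper):

```python
# Expected-cost decision rule for a binary diagnostic setting:
# missing a sick patient (false negative) is costlier than overtreating.
def bayes_decision(p_disease, cost_fn=10.0, cost_fp=1.0):
    cost_treat = (1 - p_disease) * cost_fp   # expected cost of treating
    cost_wait = p_disease * cost_fn          # expected cost of waiting
    return "treat" if cost_treat < cost_wait else "wait"

a = bayes_decision(0.2)    # 20% disease probability
b = bayes_decision(0.05)   # 5% disease probability
```

With a 10:1 cost ratio, even a 20% disease probability already warrants treatment (expected cost 0.8 vs 2.0), whereas at 5% waiting is cheaper (0.95 vs 0.5); an argmax rule would say "wait" in both cases.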
arXiv Detail & Related papers (2022-06-15T20:16:32Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Diffusion Tensor Estimation with Uncertainty Calibration [6.5085381751712506]
We propose a deep learning method to estimate the diffusion tensor and compute the estimation uncertainty.
Data-dependent uncertainty is computed directly by the network and learned via loss attenuation.
We show that the estimation uncertainties computed by the new method can highlight the model's biases, detect domain shift, and reflect the strength of noise in the measurements.
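Loss attenuation usually refers to the heteroscedastic Gaussian negative log-likelihood, where the network predicts a per-output log-variance that down-weights the residual term while a log penalty keeps the variance from growing without bound. A sketch of that standard loss, assumed here to be what the abstract means (the paper's exact formulation may differ):

```python
import math

def attenuated_loss(y, mu, log_var):
    # Gaussian NLL (up to a constant): large predicted variance
    # attenuates the squared residual, at a log-variance penalty
    return 0.5 * math.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var

# for a fixed residual r, the loss is minimized at log_var = log(r^2),
# so noisier targets are learned to carry larger predicted variance
r = 3.0
best_loss, best_lv = min(
    (attenuated_loss(r, 0.0, lv), lv) for lv in [x * 0.01 for x in range(-500, 500)]
)
```

Setting the derivative to zero gives exp(log_var) = r², so the grid search above recovers log(9) ≈ 2.20 for a residual of 3.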
arXiv Detail & Related papers (2021-11-21T15:58:01Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
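For the ensemble-based family, the usual decomposition splits predictive variance into an aleatoric part (average of member variances) and an epistemic part (variance of member means). A minimal sketch of that decomposition, with made-up member predictions:

```python
# Each ensemble member predicts a (mean, variance) pair for one input.
def decompose(preds):
    means = [m for m, _ in preds]
    variances = [v for _, v in preds]
    mbar = sum(means) / len(means)
    aleatoric = sum(variances) / len(variances)          # avg member variance
    epistemic = sum((m - mbar) ** 2 for m in means) / len(means)  # disagreement
    return aleatoric, epistemic

preds = [(1.0, 0.2), (1.2, 0.3), (0.8, 0.25)]  # (mean, variance) per member
alea, epi = decompose(preds)
```

Members that agree closely yield low epistemic uncertainty even when each reports high aleatoric noise, which is why the two terms are tracked separately.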
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks [0.0]
We propose a novel method to capture data points near the decision boundary of a neural network, which are often associated with a specific type of uncertainty.
Uncertainty estimates are derived from the input perturbations, unlike previous studies that provide perturbations on the model's parameters.
We show that the proposed method significantly outperforms other methods and poses less risk when capturing model uncertainty in machine learning.
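The idea of deriving uncertainty from input perturbations can be illustrated on a 1-D logistic model: inputs near the decision boundary change their prediction much more under small perturbations than inputs deep inside a class. This toy sensitivity score is an illustration of the general principle, not the paper's attack-based method.

```python
import math, random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def perturbation_uncertainty(x, w=2.0, b=0.0, eps=0.1, n=200):
    # average prediction shift under small random input perturbations;
    # points near the decision boundary score higher
    base = sigmoid(w * x + b)
    shifts = [abs(sigmoid(w * (x + random.uniform(-eps, eps)) + b) - base)
              for _ in range(n)]
    return sum(shifts) / n

near = perturbation_uncertainty(0.0)   # on the decision boundary
far = perturbation_uncertainty(3.0)    # deep inside one class
```

The score at the boundary is roughly two orders of magnitude larger than far from it, because the sigmoid's slope (and hence the model's sensitivity) peaks at the boundary.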
arXiv Detail & Related papers (2021-07-15T21:30:26Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model uncertainty within present-day regression techniques remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.