Likelihood Annealing: Fast Calibrated Uncertainty for Regression
- URL: http://arxiv.org/abs/2302.11012v2
- Date: Sun, 2 Jul 2023 13:01:05 GMT
- Title: Likelihood Annealing: Fast Calibrated Uncertainty for Regression
- Authors: Uddeshya Upadhyay, Jae Myung Kim, Cordelia Schmid, Bernhard Schölkopf, Zeynep Akata
- Abstract summary: This work presents a fast calibrated uncertainty estimation method for regression tasks called Likelihood Annealing.
Unlike previous methods for calibrated uncertainty in regression that focus only on low-dimensional regression problems, our method works well on a broad spectrum of regression problems, including high-dimensional regression.
- Score: 39.382916579076344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in deep learning have shown that uncertainty estimation is
becoming increasingly important in applications such as medical imaging,
natural language processing, and autonomous systems. However, accurately
quantifying uncertainty remains a challenging problem, especially in regression
tasks where the output space is continuous. Deep learning approaches that allow
uncertainty estimation for regression problems often converge slowly and yield
poorly calibrated uncertainty estimates that cannot be effectively used for
quantification. Recently proposed post hoc calibration techniques are seldom
applicable to regression problems and often add overhead to an already slow
model training phase. This work presents a fast calibrated uncertainty
estimation method for regression tasks called Likelihood Annealing, that
consistently improves the convergence of deep regression models and yields
calibrated uncertainty without any post hoc calibration phase. Unlike previous
methods for calibrated uncertainty in regression that focus only on
low-dimensional regression problems, our method works well on a broad spectrum
of regression problems, including high-dimensional regression. Our empirical
analysis shows that our approach is generalizable to various network
architectures, including multilayer perceptrons, 1D/2D convolutional networks,
and graph neural networks, on five vastly diverse tasks, i.e., chaotic particle
trajectory denoising, physical property prediction of molecules using 3D
atomistic representation, natural image super-resolution, and medical image
translation using MRI.
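For intuition, below is a minimal sketch of what a likelihood-annealed regression objective could look like: a heteroscedastic Gaussian negative log-likelihood that is blended in gradually, so training starts close to plain MSE and finishes on the full likelihood. This is an illustrative reconstruction under assumptions, not the paper's exact loss; the model `MeanVarNet`, the linear `alpha` schedule, and all hyperparameters are hypothetical.

```python
# Illustrative sketch (assumed form, not the paper's exact formulation):
# a heteroscedastic Gaussian NLL whose likelihood term is annealed in over
# training, so early epochs behave like plain MSE and later epochs use the
# full likelihood to obtain calibrated per-sample variances.
import torch
import torch.nn as nn


class MeanVarNet(nn.Module):
    """Hypothetical regressor predicting a mean and a log-variance per output."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu_head(h), self.logvar_head(h)


def annealed_nll(mu, logvar, y, alpha: float):
    """Blend MSE and Gaussian NLL; alpha is annealed from 0 to 1 (assumed schedule)."""
    mse = (y - mu) ** 2
    nll = 0.5 * (torch.exp(-logvar) * mse + logvar)
    return ((1.0 - alpha) * mse + alpha * nll).mean()


# Usage sketch: linearly anneal alpha over the first half of training.
model = MeanVarNet(in_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 8), torch.randn(32, 1)
total_epochs = 100
for epoch in range(total_epochs):
    alpha = min(1.0, epoch / (0.5 * total_epochs))
    mu, logvar = model(x)
    loss = annealed_nll(mu, logvar, y, alpha)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

One plausible reading of the abstract is that easing in the variance-weighted likelihood term avoids the slow, unstable early training often seen with heteroscedastic NLL while still producing usable predictive variances; consult the paper for the actual formulation.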
Related papers
- Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning [5.318766629972959]
Uncertainty quantification is a crucial but challenging task in many high-dimensional regression or learning problems.
We develop a new data-driven approach for UQ in regression that applies both to classical regression approaches and to neural networks.
arXiv Detail & Related papers (2024-07-18T16:42:10Z) - HypUC: Hyperfine Uncertainty Calibration with Gradient-boosted
Corrections for Reliable Regression on Imbalanced Electrocardiograms [3.482894964998886]
We propose HypUC, a framework for imbalanced probabilistic regression in medical time series.
HypUC is evaluated on a large, diverse, real-world dataset of ECGs collected from millions of patients.
arXiv Detail & Related papers (2023-11-23T06:17:31Z) - Beta quantile regression for robust estimation of uncertainty in the
presence of outliers [1.6377726761463862]
Quantile Regression can be used to estimate aleatoric uncertainty in deep neural networks.
We propose a robust solution for quantile regression that incorporates concepts from robust divergence.
arXiv Detail & Related papers (2023-09-14T01:18:57Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - Image-to-Image Regression with Distribution-Free Uncertainty
Quantification and Applications in Imaging [88.20869695803631]
We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
arXiv Detail & Related papers (2022-02-10T18:59:56Z) - Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing
Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z) - Recalibration of Aleatoric and Epistemic Regression Uncertainty in
Medical Imaging [2.126171264016785]
Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples.
σ scaling is able to reliably recalibrate predictive uncertainty (a generic sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-04-26T07:18:58Z) - Bayesian Uncertainty Estimation of Learned Variational MRI
Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z) - Calibrated Reliable Regression using Maximum Mean Discrepancy [45.45024203912822]
Modern deep neural networks still produce unreliable predictive uncertainty.
In this paper, we are concerned with getting well-calibrated predictions in regression tasks.
Experiments on non-trivial real datasets show that our method can produce well-calibrated and sharp prediction intervals.
arXiv Detail & Related papers (2020-06-18T03:38:12Z)
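As noted in the σ scaling entry above, a generic sketch of that recalibration idea follows: fit a single positive scalar on held-out predictions so that the scaled standard deviations minimize the Gaussian negative log-likelihood. This is a hedged illustration of the general technique, not the cited paper's code; the tensors `mu`, `sigma`, and `y_val` stand in for validation-set predictions from an already trained heteroscedastic model.

```python
# Generic sketch of sigma scaling (assumed form, not the cited paper's code):
# learn one scalar s on a validation set so that s * sigma minimizes the
# Gaussian negative log-likelihood of the held-out targets.
import math

import torch


def fit_sigma_scale(mu, sigma, y_val, steps: int = 200, lr: float = 0.05):
    """Return a scalar s that recalibrates predictive std devs to s * sigma."""
    log_s = torch.zeros(1, requires_grad=True)  # optimize in log-space for positivity
    opt = torch.optim.Adam([log_s], lr=lr)
    for _ in range(steps):
        s = torch.exp(log_s)
        var = (s * sigma) ** 2
        nll = 0.5 * (torch.log(2 * math.pi * var) + (y_val - mu) ** 2 / var).mean()
        opt.zero_grad()
        nll.backward()
        opt.step()
    return torch.exp(log_s).item()


# Usage sketch with placeholder validation predictions.
mu = torch.randn(256, 1)
sigma = torch.rand(256, 1) + 0.1
y_val = mu + 0.5 * torch.randn(256, 1)
s = fit_sigma_scale(mu, sigma, y_val)
calibrated_sigma = s * sigma  # use these std devs at test time
```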