Calibrated Reliable Regression using Maximum Mean Discrepancy
- URL: http://arxiv.org/abs/2006.10255v2
- Date: Wed, 28 Oct 2020 03:38:24 GMT
- Title: Calibrated Reliable Regression using Maximum Mean Discrepancy
- Authors: Peng Cui, Wenbo Hu, Jun Zhu
- Abstract summary: Modern deep neural networks still produce unreliable predictive uncertainty.
In this paper, we are concerned with getting well-calibrated predictions in regression tasks.
Experiments on non-trivial real datasets show that our method can produce well-calibrated and sharp prediction intervals.
- Score: 45.45024203912822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate quantification of uncertainty is crucial for real-world applications
of machine learning. However, modern deep neural networks still produce
unreliable predictive uncertainty, often yielding over-confident predictions.
In this paper, we are concerned with getting well-calibrated predictions in
regression tasks. We propose a calibrated regression method based on the maximum
mean discrepancy, obtained by minimizing a kernel embedding measure. Theoretically,
the calibration error of our method converges asymptotically to zero as the sample
size grows. Experiments on non-trivial real datasets show that our method produces
well-calibrated and sharp prediction intervals, outperforming the related
state-of-the-art methods.
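To make the kernel-embedding idea concrete, below is a minimal NumPy sketch, not the authors' implementation: assuming a Gaussian predictive model, it estimates the squared MMD between the probability integral transform (PIT) values of held-out targets and a uniform reference sample, which tends to zero for a well-calibrated model. The RBF kernel, its bandwidth, and the use of PIT values as the compared distribution are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm


def gaussian_kernel(x, y, bandwidth=0.1):
    """RBF kernel matrix between two 1-D samples."""
    d = x[:, None] - y[None, :]
    return np.exp(-d ** 2 / (2.0 * bandwidth ** 2))


def mmd2(x, y, bandwidth=0.1):
    """Biased (V-statistic) estimate of the squared MMD between two 1-D samples."""
    return (gaussian_kernel(x, x, bandwidth).mean()
            + gaussian_kernel(y, y, bandwidth).mean()
            - 2.0 * gaussian_kernel(x, y, bandwidth).mean())


def calibration_mmd(mu, sigma, y, seed=0):
    """Squared MMD between the PIT values of a Gaussian predictive model and a
    uniform reference sample; small values indicate good calibration."""
    pit = norm.cdf(y, loc=mu, scale=sigma)  # F(y_i | x_i) for each held-out point
    uniform = np.random.default_rng(seed).uniform(size=pit.shape[0])
    return mmd2(pit, uniform)
```

In a training loop one would add a term of this kind (computed with a differentiable kernel) to the usual regression loss; the sketch above only evaluates the discrepancy on held-out data.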
Related papers
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- On double-descent in uncertainty quantification in overparametrized models [24.073221004661427]
Uncertainty quantification is a central challenge in reliable and trustworthy machine learning.
We show a trade-off between classification accuracy and calibration, unveiling a double-descent-like behavior in the calibration curve of optimally regularized estimators.
This is in contrast with the empirical Bayes method, which we show to be well calibrated in our setting despite the higher generalization error and overparametrization.
arXiv Detail & Related papers (2022-10-23T16:01:08Z)
- On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend the approximate inference for the loss-calibrated Bayesian framework to dropweights based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
arXiv Detail & Related papers (2022-06-15T20:16:32Z)
- T-Cal: An optimal test for the calibration of predictive models [49.11538724574202]
We consider detecting mis-calibration of predictive models using a finite validation dataset as a hypothesis testing problem.
Detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions.
We propose T-Cal, a minimax test for calibration based on a de-biased plug-in estimator of the $\ell_2$-Expected Calibration Error (ECE); a standard binned plug-in ECE estimator is sketched after this list.
arXiv Detail & Related papers (2022-03-03T16:58:54Z)
- Recalibration of Aleatoric and Epistemic Regression Uncertainty in Medical Imaging [2.126171264016785]
Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples.
$\sigma$ scaling is able to reliably recalibrate predictive uncertainty; a minimal sketch of this scaling appears after this list.
arXiv Detail & Related papers (2021-04-26T07:18:58Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
- CRUDE: Calibrating Regression Uncertainty Distributions Empirically [4.552831400384914]
Calibrated uncertainty estimates in machine learning are crucial to many fields such as autonomous vehicles, medicine, and weather and climate forecasting.
We present a calibration method for regression settings that does not assume a particular uncertainty distribution over the error: Calibrating Regression Uncertainty Distributions Empirically (CRUDE).
CRUDE demonstrates consistently sharper, better calibrated, and more accurate uncertainty estimates than state-of-the-art techniques; a minimal sketch of the empirical-residual idea appears after this list.
arXiv Detail & Related papers (2020-05-26T03:08:43Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
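As referenced in the T-Cal entry above, the following is a minimal sketch of a standard binned plug-in estimator of the squared $\ell_2$-ECE for a probabilistic classifier; the de-biasing step and the test threshold used by T-Cal are not reproduced, and the bin count is an illustrative assumption.

```python
import numpy as np


def binned_l2_ece_sq(confidences, correct, n_bins=15):
    """Plug-in binned estimate of the squared l2-ECE: the bin-weighted squared
    gap between empirical accuracy and mean confidence in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = correct[mask].mean() - confidences[mask].mean()
            total += (mask.sum() / n) * gap ** 2
    return total
```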
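As referenced in the $\sigma$-scaling entry above, here is a minimal sketch of one common way to fit a single scaling factor for Gaussian predictive uncertainty: the closed form below minimizes the Gaussian negative log-likelihood on a held-out recalibration set. The paper's separate treatment of aleatoric and epistemic uncertainty is not reproduced.

```python
import numpy as np


def fit_sigma_scale(mu, sigma, y):
    """Scaling factor s minimizing the Gaussian NLL of N(mu, (s * sigma)^2)
    on a held-out recalibration set (closed-form solution)."""
    mu, sigma, y = map(np.asarray, (mu, sigma, y))
    return float(np.sqrt(np.mean((y - mu) ** 2 / sigma ** 2)))


def recalibrate(sigma, s):
    """Apply the fitted scale to the predictive standard deviations."""
    return s * np.asarray(sigma)
```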
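As referenced in the CRUDE entry above, here is a minimal sketch of the empirical-residual idea: standardized residuals from a calibration set define an empirical distribution whose quantiles are reused to build prediction intervals for new points. This is an illustration of that idea, not the paper's exact procedure.

```python
import numpy as np


def standardized_residuals(mu, sigma, y):
    """z-scores of calibration targets under the model's predicted mean and scale."""
    return (np.asarray(y) - np.asarray(mu)) / np.asarray(sigma)


def empirical_interval(mu, sigma, z_cal, alpha=0.1):
    """(1 - alpha) prediction interval for a new point using empirical quantiles
    of the calibration-set z-scores instead of Gaussian quantiles."""
    lo, hi = np.quantile(z_cal, [alpha / 2.0, 1.0 - alpha / 2.0])
    return mu + sigma * lo, mu + sigma * hi
```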