Improving model calibration with accuracy versus uncertainty optimization
- URL: http://arxiv.org/abs/2012.07923v1
- Date: Mon, 14 Dec 2020 20:19:21 GMT
- Title: Improving model calibration with accuracy versus uncertainty optimization
- Authors: Ranganath Krishnan, Omesh Tickoo
- Abstract summary: A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate.
We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
We demonstrate our approach with mean-field variational inference and compare with state-of-the-art methods.
- Score: 17.056768055368384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obtaining reliable and accurate quantification of uncertainty estimates from
deep neural networks is important in safety-critical applications. A
well-calibrated model should be accurate when it is certain about its
prediction and indicate high uncertainty when it is likely to be inaccurate.
Uncertainty calibration is a challenging problem as there is no ground truth
available for uncertainty estimates. We propose an optimization method that
leverages the relationship between accuracy and uncertainty as an anchor for
uncertainty calibration. We introduce a differentiable accuracy versus
uncertainty calibration (AvUC) loss function that allows a model to learn to
provide well-calibrated uncertainties, in addition to improved accuracy. We
also demonstrate the same methodology can be extended to post-hoc uncertainty
calibration on pretrained models. We illustrate our approach with mean-field
stochastic variational inference and compare with state-of-the-art methods.
Extensive experiments demonstrate our approach yields better model calibration
than existing methods on large-scale image classification tasks under
distributional shift.
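As a rough illustration of the idea described in the abstract, the sketch below implements a differentiable accuracy-versus-uncertainty style penalty in PyTorch: predictions are softly assigned to the accurate/inaccurate and certain/uncertain regions, and the loss shrinks as the accurate-and-certain plus inaccurate-and-uncertain mass dominates. The soft counts, the use of predictive entropy as the uncertainty measure, and the default threshold are assumptions of this sketch rather than the paper's exact AvUC formulation.

```python
import torch
import torch.nn.functional as F

def avu_style_loss(logits, labels, uncertainty_threshold=0.5, eps=1e-10):
    """Illustrative differentiable accuracy-versus-uncertainty penalty (sketch)."""
    probs = F.softmax(logits, dim=-1)
    confidence, preds = probs.max(dim=-1)                     # p_i: max softmax probability
    entropy = -(probs * torch.log(probs + eps)).sum(dim=-1)   # u_i: predictive entropy
    unc = torch.tanh(entropy)                                 # squash uncertainty into (0, 1)

    accurate = preds.eq(labels).float()
    certain = (unc <= uncertainty_threshold).float()

    # Soft counts for the four accuracy/uncertainty regions.
    n_ac = (accurate * certain * confidence * (1.0 - unc)).sum()                   # accurate & certain
    n_au = (accurate * (1.0 - certain) * confidence * unc).sum()                   # accurate & uncertain
    n_ic = ((1.0 - accurate) * certain * (1.0 - confidence) * (1.0 - unc)).sum()   # inaccurate & certain
    n_iu = ((1.0 - accurate) * (1.0 - certain) * (1.0 - confidence) * unc).sum()   # inaccurate & uncertain

    # Penalize accurate-but-uncertain and inaccurate-but-certain predictions.
    return torch.log(1.0 + (n_au + n_ic) / (n_ac + n_iu + eps))
```

In training, such a term would typically be added with a weighting coefficient to the primary objective (e.g., the ELBO in mean-field stochastic variational inference); the post-hoc variant described in the abstract would instead optimize a similar objective on held-out data with the pretrained network otherwise frozen.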
Related papers
- Calibrated Uncertainty Quantification for Operator Learning via Conformal Prediction [95.75771195913046]
We propose a risk-controlling quantile neural operator, a distribution-free, finite-sample functional calibration conformal prediction method.
We provide a theoretical calibration guarantee on the coverage rate, defined as the expected percentage of points on the function domain whose true values fall within the predicted uncertainty band.
Empirical results on a 2D Darcy flow and a 3D car surface pressure prediction task validate our theoretical results.
arXiv Detail & Related papers (2024-02-02T23:43:28Z) - Two Sides of Miscalibration: Identifying Over and Under-Confidence
Prediction for Network Calibration [1.192436948211501]
Proper confidence calibration of deep neural networks is essential for reliable predictions in safety-critical tasks.
Miscalibration can lead to model over-confidence and/or under-confidence.
We introduce a novel metric, a miscalibration score, to identify the overall and class-wise calibration status.
We use the class-wise miscalibration score as a proxy to design a calibration technique that can tackle both over and under-confidence.
arXiv Detail & Related papers (2023-08-06T17:59:14Z) - Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze the problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z) - Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty
Optimization [11.456242421204298]
In a well-calibrated model, uncertainty estimates should perfectly correlate with model error.
We propose a novel error aligned uncertainty optimization method and introduce a trainable loss function to guide the models to yield good quality uncertainty estimates aligning with the model error.
We demonstrate that our method improves the average displacement error by 1.69% and 4.69%, and the uncertainty correlation with model error by 17.22% and 19.13%, as quantified by the Pearson correlation coefficient, on two state-of-the-art baselines.
arXiv Detail & Related papers (2022-12-09T12:33:26Z) - Reliability-Aware Prediction via Uncertainty Learning for Person Image
Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z) - On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend the approximate inference for the loss-calibrated Bayesian framework to dropweights-based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
arXiv Detail & Related papers (2022-06-15T20:16:32Z) - Confidence Calibration for Intent Detection via Hyperspherical Space and
Rebalanced Accuracy-Uncertainty Loss [17.26964140836123]
In some scenarios, users care not only about the accuracy but also about the confidence of the model.
We propose a model using the hyperspherical space and rebalanced accuracy-uncertainty loss.
Our model outperforms the existing calibration methods and achieves a significant improvement on the calibration metric.
arXiv Detail & Related papers (2022-03-17T12:01:33Z) - Transferable Calibration with Lower Bias and Variance in Domain
Adaptation [139.4332115349543]
Domain Adaptation (DA) enables transferring a learning machine from a labeled source domain to an unlabeled target one.
How to estimate the predictive uncertainty of DA models is vital for decision-making in safety-critical scenarios.
TransCal can be easily applied to recalibrate existing DA methods.
arXiv Detail & Related papers (2020-07-16T11:09:36Z) - Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z) - Calibration of Model Uncertainty for Dropout Variational Inference [1.8065361710947976]
In this paper, different logit scaling methods are extended to dropout variational inference to recalibrate model uncertainty.
Experimental results show that logit scaling considerably reduces miscalibration as measured by the uncertainty calibration error (UCE).
arXiv Detail & Related papers (2020-06-20T14:12:55Z) - Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.