Unsupervised Calibration under Covariate Shift
- URL: http://arxiv.org/abs/2006.16405v1
- Date: Mon, 29 Jun 2020 21:50:07 GMT
- Title: Unsupervised Calibration under Covariate Shift
- Authors: Anusri Pampari and Stefano Ermon
- Abstract summary: We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
- Score: 92.02278658443166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A probabilistic model is said to be calibrated if its predicted probabilities
match the corresponding empirical frequencies. Calibration is important for
uncertainty quantification and decision making in safety-critical applications.
While calibration of classifiers has been widely studied, we find that
calibration is brittle and can be easily lost under minimal covariate shifts.
Existing techniques, including domain adaptation ones, primarily focus on
prediction accuracy and do not guarantee calibration either in theory or in
practice. In this work, we formally introduce the problem of calibration under
domain shift, and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets
and synthetic datasets.
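As a concrete illustration of the importance-sampling approach, the sketch below reweights a standard calibration error on labeled source data by estimated density ratios w(x) = p_target(x) / p_source(x). This is a minimal sketch, not the paper's implementation; the helper names (`density_ratio`, `weighted_ece`) and the logistic-regression density-ratio estimator are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio(x_source, x_target):
    """Estimate w(x) = p_target(x) / p_source(x) with a domain classifier.

    A logistic regression is trained to distinguish target (label 1) from
    source (label 0); by Bayes' rule the odds p(d=1|x)/p(d=0|x) equal the
    density ratio up to the sampling prior, corrected below.
    """
    x = np.vstack([x_source, x_target])
    d = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_target))])
    clf = LogisticRegression(max_iter=1000).fit(x, d)
    p = clf.predict_proba(x_source)[:, 1]
    prior = len(x_target) / len(x_source)  # corrects for unequal sample sizes
    return (p / (1.0 - p)) / prior

def weighted_ece(confidences, correct, weights, n_bins=15):
    """Importance-weighted expected calibration error on labeled source data."""
    correct = correct.astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, total = 0.0, weights.sum()
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (confidences > lo) & (confidences <= hi)
        if m.any():
            w = weights[m]
            ece += w.sum() / total * abs(
                np.average(correct[m], weights=w)
                - np.average(confidences[m], weights=w))
    return ece
```

Under covariate shift, the weighted estimate approximates the calibration error the model would incur on the target distribution, which is what a recalibration step should then minimize.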
Related papers
- Towards Certification of Uncertainty Calibration under Adversarial Attacks [96.48317453951418]
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
We propose novel calibration attacks and demonstrate how they can improve model calibration through adversarial calibration training.
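A minimal sketch of one way an attack can degrade calibration (not the paper's certified procedure): a single FGSM-style step that inflates the confidence of the model's own prediction, leaving accuracy roughly unchanged. The step size `eps` and the [0, 1] input range are assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_attack(model, x, eps=0.03):
    """One FGSM-style step that inflates predicted confidence.

    Pushing each input in the direction that raises the probability of its
    own predicted class keeps the argmax mostly fixed but makes the model
    over-confident, which is one simple way to break calibration.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pred)  # low loss = high confidence in pred
    loss.backward()
    # descend the loss w.r.t. the input => raise confidence; clamp assumes
    # image inputs scaled to [0, 1]
    return (x - eps * x.grad.sign()).detach().clamp(0, 1)
```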
arXiv Detail & Related papers (2024-05-22T18:52:09Z) - Calibration by Distribution Matching: Trainable Kernel Calibration
Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
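A minimal sketch of one such differentiable kernel calibration metric, in the style of the maximum mean calibration error (MMCE); the Laplacian kernel and its bandwidth are illustrative choices, not the paper's specific construction.

```python
import torch

def kernel_calibration_error(logits, labels, width=0.2):
    """A differentiable, MMCE-style kernel calibration penalty.

    With confidences r_i and correctness c_i, the squared error is
    sum_ij (c_i - r_i)(c_j - r_j) k(r_i, r_j) / m^2 for a kernel k.
    Being a plain tensor expression, it can be added to the training loss,
    which is the "incorporate into empirical risk minimization" step above.
    """
    probs = logits.softmax(dim=1)
    conf, pred = probs.max(dim=1)
    correct = (pred == labels).float()
    resid = correct - conf                                 # (c_i - r_i)
    k = torch.exp(-(conf[:, None] - conf[None, :]).abs() / width)
    return (resid[:, None] * resid[None, :] * k).mean().clamp_min(0).sqrt()

# e.g.: loss = F.cross_entropy(logits, labels) + lam * kernel_calibration_error(logits, labels)
```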
arXiv Detail & Related papers (2023-10-31T06:19:40Z) - Causal isotonic calibration for heterogeneous treatment effects [0.5249805590164901]
We propose causal isotonic calibration, a novel nonparametric method for calibrating predictors of heterogeneous treatment effects.
We also introduce cross-calibration, a data-efficient variant of calibration that eliminates the need for hold-out calibration sets.
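A minimal sketch of the isotonic-calibration step, assuming doubly-robust pseudo-outcomes for the treatment effect have already been computed on a calibration set; the cross-calibration variant would repeat this across folds instead of holding data out. Function names are illustrative.

```python
from sklearn.isotonic import IsotonicRegression

def isotonic_calibrate(tau_hat, pseudo_outcome):
    """Calibrate treatment-effect predictions by isotonic regression.

    tau_hat: uncalibrated CATE predictions on a calibration set.
    pseudo_outcome: unbiased proxies for the true effect (e.g. AIPW
    doubly-robust pseudo-outcomes), playing the role that labels play in
    classifier calibration. Returns a monotone map for future predictions.
    """
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(tau_hat, pseudo_outcome)
    return iso.predict  # calibrated_tau = iso.predict(tau_hat_new)
```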
arXiv Detail & Related papers (2023-02-27T18:07:49Z) - On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
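A rough sketch of the selective idea, under the assumption that a stronger temperature is applied only to pixels predicted to be incorrect while correct ones are left nearly untouched; the exact selection rule and scaling values are the paper's, not reproduced here.

```python
import numpy as np

def selective_scale(logits, correct_mask, t_correct=1.0, t_wrong=2.0):
    """Apply different temperatures to correct vs. mispredicted pixels.

    logits: array of shape (..., n_classes) of per-pixel logits.
    correct_mask: boolean array of shape (...,); at test time this would
    come from an auxiliary correctness predictor, since labels are unseen.
    """
    t = np.where(correct_mask, t_correct, t_wrong)           # per-pixel temperature
    scaled = logits / t[..., None]                           # broadcast over classes
    e = np.exp(scaled - scaled.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)
```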
arXiv Detail & Related papers (2022-12-22T22:05:16Z) - T-Cal: An optimal test for the calibration of predictive models [49.11538724574202]
We consider detecting mis-calibration of predictive models using a finite validation dataset as a hypothesis testing problem.
Detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions.
We propose T-Cal, a minimax test for calibration based on a de-biased plug-in estimator of the $\ell_2$-Expected Calibration Error (ECE).
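A minimal sketch of a de-biased plug-in estimator of the squared ECE, the ingredient such a test statistic is built from; the binning and sample-size thresholds here are illustrative, not T-Cal's exact choices.

```python
import numpy as np

def debiased_l2_ece_sq(confidences, correct, n_bins=15):
    """De-biased plug-in estimate of the squared l2-ECE.

    Within each bin, the naive plug-in term mean(d)^2 with d = correct - conf
    is biased upward by Var(d)/n_b; subtracting a sample-variance correction
    removes that bias, which matters in the small-bin regime a test cares about.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    est, n = 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (confidences > lo) & (confidences <= hi)
        nb = m.sum()
        if nb >= 2:
            d = correct[m].astype(float) - confidences[m]
            est += (nb / n) * (d.mean() ** 2 - d.var(ddof=1) / nb)
    return est
```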
arXiv Detail & Related papers (2022-03-03T16:58:54Z) - Post-hoc Uncertainty Calibration for Domain Drift Scenarios [46.88826364244423]
We show that existing post-hoc calibration methods yield highly over-confident predictions under domain shift.
We introduce a simple strategy where perturbations are applied to samples in the validation set before performing the post-hoc calibration step.
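A minimal sketch of that strategy, with Gaussian noise standing in for the paper's perturbations, followed by standard temperature scaling on the perturbed copy; `noise_std` is an assumed knob, not the paper's value.

```python
import torch
import torch.nn.functional as F

def fit_temperature_on_perturbed(model, x_val, y_val, noise_std=0.1):
    """Fit a temperature on a noise-perturbed copy of the validation set.

    Perturbing the validation inputs before the post-hoc calibration step
    yields a more conservative temperature, so confidence degrades more
    gracefully when the test distribution drifts.
    """
    with torch.no_grad():
        logits = model(x_val + noise_std * torch.randn_like(x_val))
    log_t = torch.zeros(1, requires_grad=True)      # optimize log T for positivity
    opt = torch.optim.LBFGS([log_t], max_iter=50)

    def nll():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), y_val)
        loss.backward()
        return loss

    opt.step(nll)
    return log_t.exp().item()  # divide test logits by this temperature
```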
arXiv Detail & Related papers (2020-12-20T18:21:13Z) - Improving model calibration with accuracy versus uncertainty
optimization [17.056768055368384]
A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate.
We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
We demonstrate our approach with mean-field variational inference and compare with state-of-the-art methods.
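A minimal sketch of an accuracy-versus-uncertainty (AvU) penalty in this spirit: soft counts of accurate-certain, accurate-uncertain, inaccurate-certain, and inaccurate-uncertain samples form a ratio the loss pushes up. The confidence/entropy proxies and the threshold are illustrative assumptions.

```python
import torch

def avu_loss(logits, labels, unc_threshold=0.5):
    """Soft accuracy-versus-uncertainty penalty.

    AvU = (n_AC + n_IU) / (n_AC + n_AU + n_IC + n_IU), where A/I mark
    accurate/inaccurate predictions and C/U certain/uncertain ones;
    -log(AvU) rewards being certain when correct and uncertain when wrong.
    """
    probs = logits.softmax(dim=1)
    conf, pred = probs.max(dim=1)
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    unc = torch.tanh(ent)                         # squashed uncertainty in [0, 1)
    acc = (pred == labels).float()
    certain, uncertain = unc < unc_threshold, unc >= unc_threshold
    n_ac = (acc * conf * (1 - unc))[certain].sum()
    n_au = (acc * conf * unc)[uncertain].sum()
    n_ic = ((1 - acc) * (1 - conf) * (1 - unc))[certain].sum()
    n_iu = ((1 - acc) * (1 - conf) * unc)[uncertain].sum()
    avu = (n_ac + n_iu) / (n_ac + n_au + n_ic + n_iu + 1e-12)
    return -torch.log(avu + 1e-12)
```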
arXiv Detail & Related papers (2020-12-14T20:19:21Z) - Transferable Calibration with Lower Bias and Variance in Domain
Adaptation [139.4332115349543]
Domain Adaptation (DA) enables transferring a learning machine from a labeled source domain to an unlabeled target one.
How to estimate the predictive uncertainty of DA models is vital for decision-making in safety-critical scenarios.
The proposed Transferable Calibration (TransCal) method can be easily applied to recalibrate existing DA methods.
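A minimal sketch of importance-weighted temperature scaling in TransCal's spirit, reusing density-ratio weights as in the first sketch to stand in for the unlabeled target distribution; TransCal's bias and variance corrections are omitted here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weighted_temperature(logits, labels, weights):
    """Fit a temperature by importance-weighted NLL on labeled source data.

    weights: estimated density ratios p_target(x)/p_source(x), so the
    weighted negative log-likelihood approximates the target-domain NLL.
    """
    def wnll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)  # stable log-softmax
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -(weights * logp[np.arange(len(labels)), labels]).mean()
    return minimize_scalar(wnll, bounds=(0.05, 20.0), method="bounded").x
```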
arXiv Detail & Related papers (2020-07-16T11:09:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.