Transferable Calibration with Lower Bias and Variance in Domain Adaptation
- URL: http://arxiv.org/abs/2007.08259v2
- Date: Mon, 9 Nov 2020 11:00:52 GMT
- Title: Transferable Calibration with Lower Bias and Variance in Domain Adaptation
- Authors: Ximei Wang, Mingsheng Long, Jianmin Wang, and Michael I. Jordan
- Abstract summary: Domain Adaptation (DA) enables transferring a learning machine from a labeled source domain to an unlabeled target one.
How to estimate the predictive uncertainty of DA models is vital for decision-making in safety-critical scenarios.
TransCal can be easily applied to recalibrate existing DA methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain Adaptation (DA) enables transferring a learning machine from a labeled
source domain to an unlabeled target one. While remarkable advances have been
made, most of the existing DA methods focus on improving the target accuracy at
inference. How to estimate the predictive uncertainty of DA models is vital for
decision-making in safety-critical scenarios, but remains largely unexplored.
In this paper, we delve into the open problem of Calibration in DA,
which is extremely challenging due to the coexistence of domain shift and the
lack of target labels. We first reveal the dilemma that DA models learn higher
accuracy at the expense of well-calibrated probabilities. Driven by this
finding, we propose Transferable Calibration (TransCal) to achieve more
accurate calibration with lower bias and variance in a unified
hyperparameter-free optimization framework. As a general post-hoc calibration
method, TransCal can be easily applied to recalibrate existing DA methods. Its
efficacy has been justified both theoretically and empirically.
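As a minimal illustration of the post-hoc calibration family that TransCal belongs to, the sketch below implements standard temperature scaling: a single scalar T is fit on held-out logits to minimize negative log-likelihood. This is the generic recipe such methods build on, not TransCal's actual bias/variance-reduced objective; the function names are my own.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=None):
    """Pick the temperature T > 0 minimizing held-out NLL of softmax(logits / T).

    A simple grid search keeps the sketch dependency-free; in practice the
    1-D objective is usually minimized with an off-the-shelf optimizer.
    """
    if grid is None:
        grid = np.linspace(0.1, 10.0, 200)
    idx = np.arange(len(labels))

    def nll(T):
        p = softmax(logits / T)
        return -np.log(p[idx, labels] + 1e-12).mean()

    return min(grid, key=nll)
```

For an overconfident model (logits systematically too sharp), the fitted T comes out greater than 1, softening the predicted probabilities without changing the argmax predictions or accuracy.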
Related papers
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
arXiv Detail & Related papers (2023-08-06T03:28:34Z)
- PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation [87.69789891809562]
Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains.
The calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention.
We propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data.
arXiv Detail & Related papers (2023-07-14T17:21:41Z)
- Beyond In-Domain Scenarios: Robust Density-Aware Calibration [48.00374886504513]
Calibrating deep learning models to yield uncertainty-aware predictions is crucial as deep neural networks get increasingly deployed in safety-critical applications.
We propose DAC, an accuracy-preserving, Density-Aware Calibration method based on k-nearest neighbors (KNN).
We show that DAC boosts the robustness of calibration performance in domain-shift and OOD, while maintaining excellent in-domain predictive uncertainty estimates.
arXiv Detail & Related papers (2023-02-10T08:48:32Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Post-hoc Uncertainty Calibration for Domain Drift Scenarios [46.88826364244423]
We show that existing post-hoc calibration methods yield highly over-confident predictions under domain shift.
We introduce a simple strategy where perturbations are applied to samples in the validation set before performing the post-hoc calibration step.
arXiv Detail & Related papers (2020-12-20T18:21:13Z)
- Improving model calibration with accuracy versus uncertainty optimization [17.056768055368384]
A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate.
We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
We demonstrate our approach with mean-field variational inference and compare with state-of-the-art methods.
arXiv Detail & Related papers (2020-12-14T20:19:21Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
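The importance-sampling idea in the entry above can be sketched as an importance-weighted expected calibration error (ECE): labeled source validation samples are reweighted by density ratios w(x) = p_target(x) / p_source(x) so that the binned calibration gap estimates the error on the target domain. The sketch below assumes such ratios are already estimated; it is an illustration of the general recipe, not the paper's exact estimator.

```python
import numpy as np

def weighted_ece(conf, correct, weights, n_bins=10):
    """Importance-weighted expected calibration error.

    conf:    max predicted probability per source-validation sample
    correct: 1.0 if the prediction was correct, else 0.0
    weights: estimated density ratios p_target(x) / p_source(x)
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = weights.sum()
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if not m.any():
            continue
        w = weights[m]
        acc = (w * correct[m]).sum() / w.sum()        # weighted accuracy in bin
        avg_conf = (w * conf[m]).sum() / w.sum()      # weighted confidence in bin
        ece += (w.sum() / total) * abs(acc - avg_conf)
    return ece
```

With uniform weights this reduces to the ordinary ECE; upweighting samples on which the model is wrong (i.e., regions overrepresented in the target domain) raises the estimated calibration error accordingly.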
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.