Confidence Calibration for Domain Generalization under Covariate Shift
- URL: http://arxiv.org/abs/2104.00742v1
- Date: Thu, 1 Apr 2021 19:31:54 GMT
- Title: Confidence Calibration for Domain Generalization under Covariate Shift
- Authors: Yunye Gong, Xiao Lin, Yi Yao, Thomas G. Dietterich, Ajay Divakaran,
Melinda Gervasio
- Abstract summary: We present novel calibration solutions via domain generalization.
Our core idea is to leverage multiple calibration domains to reduce the effective distribution disparity between the target and calibration domains.
Compared against the state-of-the-art calibration methods designed for domain adaptation, we observe a decrease of 8.86 percentage points in expected calibration error.
- Score: 12.527429721643783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing calibration algorithms address the problem of covariate shift via
unsupervised domain adaptation. However, these methods suffer from the
following limitations: 1) they require unlabeled data from the target domain,
which may not be available at the stage of calibration in real-world
applications and 2) their performances heavily depend on the disparity between
the distributions of the source and target domains. To address these two
limitations, we present novel calibration solutions via domain generalization
which, to the best of our knowledge, are the first of their kind. Our core idea
is to leverage multiple calibration domains to reduce the effective
distribution disparity between the target and calibration domains for improved
calibration transfer without needing any data from the target domain. We
provide theoretical justification and empirical experimental results to
demonstrate the effectiveness of our proposed algorithms. Compared against the
state-of-the-art calibration methods designed for domain adaptation, we observe
a decrease of 8.86 percentage points in expected calibration error,
equivalently an increase of 35 percentage points in improvement ratio, for
multi-class classification on the Office-Home dataset.
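To make the quantities in the abstract concrete, here is a minimal sketch, assuming a standard binned expected calibration error (ECE) and a simple single-temperature calibrator fit on logits pooled from several held-out calibration domains. The pooling strategy, the grid search, and all function names are illustrative assumptions, not the paper's domain-generalization algorithm.

```python
# Illustrative sketch only: binned ECE plus a single temperature fit on logits
# pooled across several held-out "calibration domains". Not the paper's method.
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=15):
    """Standard ECE: bin predictions by confidence and average |accuracy - confidence|."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return ece

def fit_pooled_temperature(logits_per_domain, labels_per_domain,
                           temperatures=np.linspace(0.5, 5.0, 91)):
    """Fit one softmax temperature by grid-searching NLL on logits pooled across domains."""
    logits = np.concatenate(logits_per_domain)
    labels = np.concatenate(labels_per_domain)
    best_t, best_nll = 1.0, np.inf
    for t in temperatures:
        p = softmax(logits / t)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

At test time, target-domain logits would be divided by the fitted temperature before confidences and ECE are computed; the paper's contribution lies in how multiple calibration domains are combined to reduce the effective distribution disparity, which this pooled baseline does not capture.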
Related papers
- Continual Domain Adversarial Adaptation via Double-Head Discriminators [9.27879320502565]
Domain adversarial adaptation in a continual setting poses a significant challenge due to the limitations on accessing previous source domain data.
We propose a double-head discriminator algorithm by introducing an additional source-only domain discriminator.
We prove that with the introduction of a pre-trained source-only domain discriminator, the empirical estimation error of the $\mathcal{H}$-divergence-related adversarial loss is reduced.
arXiv Detail & Related papers (2024-02-05T23:46:03Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation [87.69789891809562]
Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains.
The calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention.
We propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data.
arXiv Detail & Related papers (2023-07-14T17:21:41Z)
- Post-hoc Uncertainty Calibration for Domain Drift Scenarios [46.88826364244423]
We show that existing post-hoc calibration methods yield highly over-confident predictions under domain shift.
We introduce a simple strategy in which perturbations are applied to samples in the validation set before performing the post-hoc calibration step (a minimal sketch of this perturb-then-calibrate idea appears after this list).
arXiv Detail & Related papers (2020-12-20T18:21:13Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose a Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
arXiv Detail & Related papers (2020-08-21T18:43:37Z)
- Transferable Calibration with Lower Bias and Variance in Domain Adaptation [139.4332115349543]
Domain Adaptation (DA) enables transferring a learning machine from a labeled source domain to an unlabeled target one.
How to estimate the predictive uncertainty of DA models is vital for decision-making in safety-critical scenarios.
The proposed method, TransCal, can be easily applied to recalibrate existing DA methods.
arXiv Detail & Related papers (2020-07-16T11:09:36Z)
- Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment [27.671964294233756]
In this study, we focus on the unsupervised domain adaptation problem where an approximate inference model is to be learned from a labeled data domain.
The success of unsupervised domain adaptation largely relies on the cross-domain feature alignment.
We introduce a Gaussian-guided latent alignment approach to align the latent feature distributions of the two domains under the guidance of the prior distribution.
In such an indirect way, the distributions over the samples from the two domains will be constructed on a common feature space, i.e., the space of the prior.
arXiv Detail & Related papers (2020-06-23T05:33:54Z)
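As a side note to the Post-hoc Uncertainty Calibration for Domain Drift Scenarios entry above, here is a minimal sketch of its perturb-then-calibrate idea, assuming Gaussian input noise and a grid-searched softmax temperature; both are illustrative choices rather than that paper's exact procedure, and `logits_fn` is a hypothetical stand-in for any trained classifier's forward pass.

```python
# Illustrative sketch: perturb the validation set, then fit a post-hoc temperature
# on the noisy logits. Noise model and grid search are assumptions for illustration.
import numpy as np

def perturb_then_calibrate(logits_fn, val_x, val_y, noise_std=0.1,
                           temperatures=np.linspace(0.5, 5.0, 91), seed=0):
    """Add Gaussian input noise to the validation set, then fit a softmax temperature."""
    rng = np.random.default_rng(seed)
    noisy_x = val_x + rng.normal(scale=noise_std, size=val_x.shape)
    logits = logits_fn(noisy_x)                      # user-supplied forward pass (pre-softmax scores)
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize the softmax
    best_t, best_nll = 1.0, np.inf
    for t in temperatures:
        p = np.exp(z / t)
        p /= p.sum(axis=1, keepdims=True)
        nll = -np.log(p[np.arange(len(val_y)), val_y] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t  # divide target-domain logits by this temperature at test time
```

The intent of the perturbation is to mimic domain drift on the held-out data, so that the fitted temperature is less over-confident when transferred to a shifted target domain.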
This list is automatically generated from the titles and abstracts of the papers on this site.