Dirichlet-based Uncertainty Calibration for Active Domain Adaptation
- URL: http://arxiv.org/abs/2302.13824v1
- Date: Mon, 27 Feb 2023 14:33:29 GMT
- Title: Dirichlet-based Uncertainty Calibration for Active Domain Adaptation
- Authors: Mixue Xie, Shuang Li, Rui Zhang, Chi Harold Liu
- Abstract summary: Active domain adaptation (DA) aims to maximally boost the model adaptation on a new target domain by actively selecting limited target data to annotate.
Traditional active learning methods may be less effective since they do not consider the domain shift issue.
We propose a Dirichlet-based Uncertainty Calibration (DUC) approach for active DA, which simultaneously achieves the mitigation of miscalibration and the selection of informative target samples.
- Score: 33.33529827699169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active domain adaptation (DA) aims to maximally boost the model adaptation on
a new target domain by actively selecting limited target data to annotate,
whereas traditional active learning methods may be less effective since they do
not consider the domain shift issue. Although active DA methods address this by
further proposing targetness to measure the representativeness of target domain
characteristics, their predictive uncertainty is usually based on the
prediction of deterministic models, which can easily be miscalibrated on data
with distribution shift. Considering this, we propose a \textit{Dirichlet-based
Uncertainty Calibration} (DUC) approach for active DA, which simultaneously
achieves the mitigation of miscalibration and the selection of informative
target samples. Specifically, we place a Dirichlet prior on the prediction and
interpret the prediction as a distribution on the probability simplex, rather
than a point estimate as in deterministic models. This enables us to
consider all possible predictions, mitigating the miscalibration caused by
relying on a single prediction. A two-round selection strategy based on
different uncertainty origins is then designed to select target samples that
are both representative of the target domain and conducive to
discriminability. Extensive experiments on
cross-domain image classification and semantic segmentation validate the
superiority of DUC.
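To make the Dirichlet view concrete, the following is a minimal, hedged sketch (in PyTorch) of how a prediction can be parameterized as a Dirichlet distribution over the probability simplex and how its uncertainty can be split into two origins. The evidence activation (softplus), the function name dirichlet_uncertainties, and the specific decomposition into expected data uncertainty and distribution uncertainty (mutual information) are illustrative assumptions, not the authors' released implementation.

    # Sketch: Dirichlet-based uncertainty from raw network outputs.
    import torch
    import torch.nn.functional as F

    def dirichlet_uncertainties(logits: torch.Tensor):
        """logits: (N, C) raw outputs; returns per-sample uncertainties."""
        evidence = F.softplus(logits)           # non-negative evidence (assumed activation)
        alpha = evidence + 1.0                  # Dirichlet concentration parameters
        alpha0 = alpha.sum(dim=1, keepdim=True) # Dirichlet strength
        p_mean = alpha / alpha0                 # expected class probabilities

        # Total uncertainty: entropy of the expected categorical distribution.
        total = -(p_mean * p_mean.clamp_min(1e-12).log()).sum(dim=1)

        # Expected data (aleatoric) uncertainty under the Dirichlet:
        # E[H[Cat(p)]] = -sum_c (alpha_c / alpha0) * (digamma(alpha_c + 1) - digamma(alpha0 + 1))
        expected_data = -(p_mean * (torch.digamma(alpha + 1.0)
                                    - torch.digamma(alpha0 + 1.0))).sum(dim=1)

        # Distribution (epistemic) uncertainty: mutual information = total - expected data.
        distribution = total - expected_data
        return total, expected_data, distribution

Under such a decomposition, a two-round selection could, for instance, first shortlist samples with high distribution uncertainty (reflecting target-specific characteristics under domain shift) and then keep those with high data uncertainty (close to the decision boundary); the exact criteria used by DUC may differ from this sketch.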
Related papers
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with
Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
arXiv Detail & Related papers (2023-08-06T03:28:34Z) - Calibrated Selective Classification [34.08454890436067]
We develop a new approach to selective classification in which we propose a method for rejecting examples with "uncertain" uncertainties.
We present a framework for learning selectively calibrated models, where a separate selector network is trained to improve the selective calibration error of a given base model.
We demonstrate the empirical effectiveness of our approach on multiple image classification and lung cancer risk assessment tasks.
arXiv Detail & Related papers (2022-08-25T13:31:09Z) - Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z) - Learning Unbiased Transferability for Domain Adaptation by Uncertainty
Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance in the amount of annotated data between the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z) - Selecting Treatment Effects Models for Domain Adaptation Using Causal
Knowledge [82.5462771088607]
We propose a novel model selection metric specifically designed for ITE methods under the unsupervised domain adaptation setting.
In particular, we propose selecting models whose predictions of interventions' effects satisfy known causal structures in the target domain.
arXiv Detail & Related papers (2021-02-11T21:03:14Z) - Bi-Classifier Determinacy Maximization for Unsupervised Domain
Adaptation [24.9073164947711]
We present Bi-Classifier Determinacy Maximization (BCDM) to tackle unsupervised domain adaptation.
Motivated by the observation that target samples cannot always be separated distinctly by the decision boundary, we design a novel classifier determinacy disparity metric.
BCDM can generate discriminative representations by encouraging target predictive outputs to be consistent and determined.
arXiv Detail & Related papers (2020-12-13T07:55:39Z) - Estimating Generalization under Distribution Shifts via Domain-Invariant
Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z) - Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain
Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on unsupervised domain adaptation for semantic segmentation, transferring knowledge from the source domain to the target domain.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify pseudo-label learning (a generic sketch of this idea follows the list below).
arXiv Detail & Related papers (2020-03-08T12:37:19Z)