Mitigating Uncertainty of Classifier for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2107.00727v1
- Date: Thu, 1 Jul 2021 20:08:15 GMT
- Title: Mitigating Uncertainty of Classifier for Unsupervised Domain Adaptation
- Authors: Shanu Kumar, Vinod Kumar Kurmi, Praphul Singh, Vinay P Namboodiri
- Abstract summary: We thoroughly examine the role of a classifier in terms of matching source and target distributions.
Our analysis suggests that using these three distributions results in consistently improved performance on all the datasets.
- Score: 21.56619121620334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation is an important task that has been
well explored. However, the wide variety of existing methods has not analyzed
the role of the classifier's performance in detail. In this paper, we thoroughly
examine the role of the classifier in terms of matching source and target
distributions. We specifically investigate the classifier's ability by matching
a) the distribution of features, b) probabilistic uncertainty for samples, and
c) certainty activation mappings. Our analysis suggests that using these three
distributions results in consistently improved performance on all the
datasets. Our work thus extends present knowledge on the role of the various
distributions obtained from the classifier towards solving unsupervised domain
adaptation.
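The abstract names three quantities to match across domains but gives no formulas here. As a minimal illustration (assuming the standard definitions, not the paper's exact losses), the per-sample probabilistic uncertainty can be taken as the entropy of the classifier's softmax output, and a crude feature-distribution match can be scored by the distance between the domains' mean feature vectors:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    # Probabilistic uncertainty of each sample: entropy of the
    # classifier's softmax output (higher = more uncertain).
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def mean_feature_discrepancy(src_feats, tgt_feats):
    # A first-moment proxy for feature-distribution mismatch:
    # squared distance between the domains' mean feature vectors.
    return float(np.sum((src_feats.mean(0) - tgt_feats.mean(0)) ** 2))
```

In the full method these quantities would feed adversarial or discrepancy-based alignment losses; the sketch only shows how the raw statistics are computed.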
Related papers
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Empirical Study on Optimizer Selection for Out-of-Distribution Generalization [16.386766049451317]
Modern deep learning systems do not generalize well when the test data distribution is slightly different from the training data distribution.
In this study, we examine the performance of popular first-order optimizers under different classes of distribution shift.
arXiv Detail & Related papers (2022-11-15T23:56:30Z)
- Unsupervised domain adaptation with non-stochastic missing data [0.6608945629704323]
We consider unsupervised domain adaptation (UDA) for classification problems in the presence of missing data in the unlabelled target domain.
Imputation is performed in a domain-invariant latent space and leverages indirect supervision from a complete source domain.
We show the benefits of jointly performing adaptation, classification and imputation on datasets.
arXiv Detail & Related papers (2021-09-16T06:37:07Z)
- Learning to Transfer with von Neumann Conditional Divergence [14.926485055255942]
We introduce the recently proposed von Neumann conditional divergence to improve the transferability across multiple domains.
We design novel learning objectives assuming those source tasks are observed either simultaneously or sequentially.
In both scenarios, we obtain favorable performance against state-of-the-art methods in terms of smaller generalization error on new tasks and less catastrophic forgetting on source tasks (in the sequential setup).
arXiv Detail & Related papers (2021-08-07T22:18:23Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
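The DoC statistic described above is simple enough to sketch directly. Assuming `source_probs` and `target_probs` are softmax outputs of the same classifier on held-out source data and shifted target data (the names are illustrative, not from the paper):

```python
import numpy as np

def avg_confidence(probs):
    # Mean top-class softmax probability over a batch of predictions.
    return float(probs.max(axis=1).mean())

def difference_of_confidences(source_probs, target_probs):
    # DoC: drop in average confidence from the source (in-distribution)
    # data to the shifted target data; used as an estimate of the
    # accuracy drop the classifier suffers under the shift.
    return avg_confidence(source_probs) - avg_confidence(target_probs)

def predicted_target_accuracy(source_accuracy, source_probs, target_probs):
    # Estimated accuracy on shifted data: source accuracy minus DoC.
    return source_accuracy - difference_of_confidences(source_probs, target_probs)
```

A large positive DoC flags a shift on which the classifier is likely to lose accuracy, without needing any target labels.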
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Bi-Classifier Determinacy Maximization for Unsupervised Domain Adaptation [24.9073164947711]
We present Bi-Classifier Determinacy Maximization (BCDM) to tackle unsupervised domain adaptation.
Motivated by the observation that target samples cannot always be separated distinctly by the decision boundary, we design a novel classifier determinacy disparity metric.
BCDM can generate discriminative representations by encouraging target predictive outputs to be consistent and determined.
arXiv Detail & Related papers (2020-12-13T07:55:39Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
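The density ratio mentioned above has a classic closed form once a domain discriminator is available. Assuming a discriminator trained on a balanced mix of source and target samples outputs d(x) = P(source | x), Bayes' rule gives the ratio directly:

```python
import numpy as np

def density_ratio_from_discriminator(d_prob, eps=1e-6):
    # Given a domain discriminator's probability d(x) = P(source | x),
    # trained on a 50/50 mix of source and target samples, the identity
    #   p_source(x) / p_target(x) = d(x) / (1 - d(x))
    # recovers the density ratio, i.e. how close a target (test)
    # sample lies to the source (training) distribution.
    d = np.clip(d_prob, eps, 1.0 - eps)
    return d / (1.0 - d)
```

A ratio near 1 marks samples indistinguishable from training data; a ratio near 0 marks samples far from it, where predicted uncertainties deserve the least trust.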
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.