Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift
- URL: http://arxiv.org/abs/2003.04475v3
- Date: Fri, 11 Dec 2020 21:59:58 GMT
- Title: Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift
- Authors: Remi Tachet, Han Zhao, Yu-Xiang Wang and Geoff Gordon
- Abstract summary: Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting.
We propose a new assumption, generalized label shift ($GLS$), to improve robustness against mismatched label distributions.
Our algorithms outperform the base versions, with vast improvements for large label distribution mismatches.
- Score: 20.533804144992207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial learning has demonstrated good performance in the unsupervised
domain adaptation setting, by learning domain-invariant representations.
However, recent work has shown limitations of this approach when label
distributions differ between the source and target domains. In this paper, we
propose a new assumption, generalized label shift ($GLS$), to improve
robustness against mismatched label distributions. $GLS$ states that,
conditioned on the label, there exists a representation of the input that is
invariant between the source and target domains. Under $GLS$, we provide
theoretical guarantees on the transfer performance of any classifier. We also
devise necessary and sufficient conditions for $GLS$ to hold, by using an
estimation of the relative class weights between domains and an appropriate
reweighting of samples. Our weight estimation method could be straightforwardly
and generically applied in existing domain adaptation (DA) algorithms that
learn domain-invariant representations, with small computational overhead. In
particular, we modify three DA algorithms, JAN, DANN and CDAN, and evaluate
their performance on standard and artificial DA tasks. Our algorithms
outperform the base versions, with vast improvements for large label
distribution mismatches. Our code is available at https://tinyurl.com/y585xt6j.
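For concreteness, the $GLS$ condition quoted above admits a one-line formalization. A minimal statement, writing $Z = g(X)$ for the learned representation (the symbol $g$ is our notation, not taken verbatim from the paper):

$$p_S(Z \mid Y = y) = p_T(Z \mid Y = y) \quad \text{for every label } y, \qquad Z = g(X).$$

Unlike the usual domain-invariance objective $p_S(Z) = p_T(Z)$, this allows the marginals of $Z$ to differ across domains whenever the label distributions $p_S(Y)$ and $p_T(Y)$ differ, which is exactly the mismatched-label-distribution regime the paper targets.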
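The abstract's recipe (estimate the relative class weights between domains, then reweight source samples before matching feature distributions) can be sketched in code. The sketch below is an illustration under stated assumptions, not the authors' released implementation: the helper names are hypothetical, the weight estimator follows a standard confusion-matrix (black-box shift estimation) approach consistent with the abstract's description, and any constraints or regularization the paper applies are omitted.

```python
# Hedged sketch of importance-weighted domain-adversarial training.
# Helper names (estimate_class_weights, weighted_adversarial_loss, disc)
# are illustrative; the confusion-matrix estimator is a BBSE-style
# assumption, and the paper's regularization/projection steps are omitted.
import torch
import torch.nn.functional as F

def estimate_class_weights(src_probs, src_labels, tgt_probs, num_classes):
    """Estimate w[y] ~= p_T(y) / p_S(y) by solving C w = mu, where
    C[i, j] = p_S(pred = i, label = j) is the source confusion matrix
    and mu[i] = p_T(pred = i) is the target prediction distribution."""
    src_preds = src_probs.argmax(dim=1)
    C = torch.zeros(num_classes, num_classes)
    for p, y in zip(src_preds.tolist(), src_labels.tolist()):
        C[p, y] += 1.0
    C /= src_labels.numel()
    mu = F.one_hot(tgt_probs.argmax(dim=1), num_classes).float().mean(dim=0)
    w = torch.linalg.lstsq(C, mu.unsqueeze(1)).solution.squeeze(1)
    return w.clamp(min=0.0)  # class-ratio estimates must be non-negative

def weighted_adversarial_loss(disc, z_src, z_tgt, src_labels, w):
    """Domain-discriminator loss with each source sample reweighted by
    the estimated class ratio w[y] of its label y."""
    sample_w = w[src_labels]              # per-sample weight p_T(y)/p_S(y)
    d_src = disc(z_src).squeeze(1)        # logit for "sample is source"
    d_tgt = disc(z_tgt).squeeze(1)
    loss_src = F.binary_cross_entropy_with_logits(
        d_src, torch.ones_like(d_src), weight=sample_w)
    loss_tgt = F.binary_cross_entropy_with_logits(
        d_tgt, torch.zeros_like(d_tgt))
    return loss_src + loss_tgt
```

In a DANN-style pipeline, `weighted_adversarial_loss` would replace the unweighted discriminator loss, with `w` re-estimated periodically from current model predictions; the same per-sample weights could also be applied to the source classification loss.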
Related papers
- Domain Adaptation under Open Set Label Shift [39.424134505152544]
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS).
OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning.
We propose practical methods for both tasks that leverage black-box predictors.
arXiv Detail & Related papers (2022-07-26T17:09:48Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain with a different data distribution.
Recently, Source-Free Domain Adaptation (SFDA), which tackles the domain adaptation problem without access to source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE that addresses the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference [15.891459629460796]
We propose a domain generalization (DG) approach to learn on several labeled source domains.
We show that our framework is robust to the label shift and the cross-domain accuracy is significantly improved.
arXiv Detail & Related papers (2021-07-22T21:19:12Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is a subset of the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Sparsely-Labeled Source Assisted Domain Adaptation [64.75698236688729]
This paper proposes a novel Sparsely-Labeled Source Assisted Domain Adaptation (SLSA-DA) algorithm.
To address the label scarcity problem, projected clustering is conducted on both the source and target domains.
arXiv Detail & Related papers (2020-05-08T15:37:35Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$^3$US surpasses the state of the art on partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.