Adversarial Unsupervised Domain Adaptation with Conditional and Label
Shift: Infer, Align and Iterate
- URL: http://arxiv.org/abs/2107.13469v1
- Date: Wed, 28 Jul 2021 16:28:01 GMT
- Title: Adversarial Unsupervised Domain Adaptation with Conditional and Label
Shift: Infer, Align and Iterate
- Authors: Xiaofeng Liu, Zhenhua Guo, Site Li, Fangxu Xing, Jane You, C.-C. Jay
Kuo, Georges El Fakhri, Jonghye Woo
- Abstract summary: We propose an adversarial unsupervised domain adaptation (UDA) approach with the inherent conditional and label shifts.
We infer the marginal $p(y)$ and align $p(x|y)$ iteratively during training, and precisely align the posterior $p(y|x)$ at test time.
- Score: 47.67549731439979
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose an adversarial unsupervised domain adaptation (UDA)
approach with the inherent conditional and label shifts, in which we aim to
align the distributions w.r.t. both $p(x|y)$ and $p(y)$. Since the label is
inaccessible in the target domain, the conventional adversarial UDA assumes
$p(y)$ is invariant across domains, and relies on aligning $p(x)$ as an
alternative to the $p(x|y)$ alignment. To address this, we provide a thorough
theoretical and empirical analysis of the conventional adversarial UDA methods
under both conditional and label shifts, and propose a novel and practical
alternative optimization scheme for adversarial UDA. Specifically, we infer the
marginal $p(y)$ and align $p(x|y)$ iteratively during training, and precisely
align the posterior $p(y|x)$ at test time. Our experimental results demonstrate
its effectiveness on both classification and segmentation UDA, and partial UDA.
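The infer-and-align loop above hinges on estimating the target label marginal $p(y)$ without target labels, then using the resulting weights both during training and for the test-time posterior correction. A minimal NumPy sketch of one common building block, a BBSE-style moment-matching estimator of $w_y = p_t(y)/p_s(y)$ plus the posterior alignment the abstract describes (function names and the estimator choice are illustrative, not the authors' exact procedure):

```python
import numpy as np

def estimate_importance_weights(conf_mat, target_pred_marginal):
    # BBSE-style moment matching: conf_mat[i, j] = p_s(yhat=i, y=j) on
    # held-out source data, target_pred_marginal[i] = p_t(yhat=i).
    # Under label shift, conf_mat @ w = target_pred_marginal, where
    # w[j] = p_t(y=j) / p_s(y=j); clip to keep the estimate valid.
    w = np.linalg.solve(conf_mat, target_pred_marginal)
    return np.clip(w, 0.0, None)

def correct_posterior(source_posteriors, w):
    # Test-time alignment of p(y|x): rescale the source-trained
    # posterior p_s(y|x) by the estimated weights and renormalize.
    unnorm = source_posteriors * w
    return unnorm / unnorm.sum(axis=1, keepdims=True)
```

In the iterative scheme, the estimated $w$ would also reweight the class-conditional alignment of $p(x|y)$ at each round before the marginal is re-inferred.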
Related papers
- Unsupervised Learning under Latent Label Shift [21.508249151557244]
We introduce unsupervised learning under Latent Label Shift (LLS).
We show that our algorithm can leverage domain information to improve state-of-the-art unsupervised classification methods.
arXiv Detail & Related papers (2022-07-26T20:52:53Z) - Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path
and Beyond [20.518134448156744]
Gradual domain adaptation (GDA) assumes a path of $(T-1)$ unlabeled intermediate domains bridging the source and target.
We prove a significantly improved generalization bound of $\widetilde{O}\left(\varepsilon_0 + O\left(\sqrt{\log(T)/n}\right)\right)$, where $\Delta$ is the average distributional distance between consecutive domains.
arXiv Detail & Related papers (2022-04-18T07:39:23Z) - Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised
Domain Adaptation [88.5448806952394]
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain and unlabeled data from a target domain are used to learn a classifier for the target domain.
We show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods.
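Contrastive pre-training of the kind discussed here typically optimizes an InfoNCE/NT-Xent objective over two augmented views of each unlabeled sample. A self-contained NumPy sketch of that loss (a generic SimCLR-style formulation for illustration, not this paper's exact recipe):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    # z1, z2: L2-normalized embeddings of two augmented views, shape (n, d).
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)          # (2n, d) joint batch
    sim = z @ z.T / temperature                   # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    # the positive of sample i is its other view at index i + n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Pre-training with such a loss on unlabeled source and target data, then fine-tuning on labeled source data, is the pipeline the summary above refers to.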
arXiv Detail & Related papers (2022-04-01T16:56:26Z) - Seeking Similarities over Differences: Similarity-based Domain Alignment
for Adaptive Object Detection [86.98573522894961]
We propose a framework that generalizes the components commonly used by Unsupervised Domain Adaptation (UDA) algorithms for detection.
Specifically, we propose a novel UDA algorithm, ViSGA, that leverages the best design choices and introduces a simple but effective method to aggregate features at the instance level.
We show that both similarity-based grouping and adversarial training allow our model to focus on coarsely aligning feature groups, without being forced to match all instances across loosely aligned domains.
arXiv Detail & Related papers (2021-10-04T13:09:56Z) - Domain Generalization under Conditional and Label Shifts via Variational
Bayesian Inference [15.891459629460796]
We propose a domain generalization (DG) approach to learn from several labeled source domains.
We show that our framework is robust to the label shift and the cross-domain accuracy is significantly improved.
arXiv Detail & Related papers (2021-07-22T21:19:12Z) - Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z) - Rethinking Distributional Matching Based Domain Adaptation [111.15106414932413]
Domain adaptation (DA) is a technique that transfers predictive models trained on a labeled source domain to an unlabeled target domain.
Most popular DA algorithms are based on distributional matching (DM).
In this paper, we first systematically analyze the limitations of DM based methods, and then build new benchmarks with more realistic domain shifts.
arXiv Detail & Related papers (2020-06-23T21:55:14Z) - Domain Adaptation with Conditional Distribution Matching and Generalized
Label Shift [20.533804144992207]
Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting.
We propose a new assumption, generalized label shift (GLS), to improve robustness against mismatched label distributions.
Our algorithms outperform the base versions, with vast improvements for large label distribution mismatches.
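Under generalized label shift, a standard correction is to reweight the source classification risk by class importance weights $w_y = p_t(y)/p_s(y)$, so that minimizing the source loss targets the target label distribution. A hedged NumPy sketch of such a reweighted loss (illustrative only; the paper's algorithms combine this kind of reweighting with conditional feature alignment):

```python
import numpy as np

def reweighted_nll(log_probs, labels, w):
    # Importance-weighted negative log-likelihood: each source sample
    # with label y contributes with weight w[y] ~= p_t(y) / p_s(y),
    # so the empirical source risk matches the target label marginal.
    labels = np.asarray(labels)
    sample_w = w[labels]
    nll = -log_probs[np.arange(len(labels)), labels]
    return float((sample_w * nll).sum() / sample_w.sum())
```

With uniform weights this reduces to the ordinary mean cross-entropy; larger mismatches between $p_s(y)$ and $p_t(y)$ make the reweighting matter more, consistent with the "vast improvements for large label distribution mismatches" noted above.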
arXiv Detail & Related papers (2020-03-10T00:35:23Z) - A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$^3$US surpasses the state of the art for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z) - Partially-Shared Variational Auto-encoders for Unsupervised Domain
Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert the domain of each sample via a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.