Domain Generalization under Conditional and Label Shifts via Variational
Bayesian Inference
- URL: http://arxiv.org/abs/2107.10931v1
- Date: Thu, 22 Jul 2021 21:19:12 GMT
- Title: Domain Generalization under Conditional and Label Shifts via Variational
Bayesian Inference
- Authors: Xiaofeng Liu, Bo Hu, Linghao Jin, Xu Han, Fangxu Xing, Jinsong Ouyang,
Jun Lu, Georges EL Fakhri, Jonghye Woo
- Abstract summary: We propose a domain generalization (DG) approach to learn on several labeled source domains.
We show that our framework is robust to the label shift and the cross-domain accuracy is significantly improved.
- Score: 15.891459629460796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a domain generalization (DG) approach to learn on
several labeled source domains and transfer knowledge to a target domain that
is inaccessible in training. Considering the inherent conditional and label
shifts, we would expect the alignment of $p(x|y)$ and $p(y)$. However, the
widely used domain invariant feature learning (IFL) methods rely on aligning
the marginal concept shift w.r.t. $p(x)$, which rests on an unrealistic
assumption that $p(y)$ is invariant across domains. We thereby propose a novel
variational Bayesian inference framework to enforce the conditional
distribution alignment w.r.t. $p(x|y)$ via the prior distribution matching in a
latent space, which also takes the marginal label shift w.r.t. $p(y)$ into
consideration with the posterior alignment. Extensive experiments on various
benchmarks demonstrate that our framework is robust to the label shift and the
cross-domain accuracy is significantly improved, thereby achieving superior
performance over the conventional IFL counterparts.
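As a rough illustration of the conditional-alignment idea (a hypothetical sketch, not the authors' exact model): each class $y$ can be given its own latent prior $p(z|y)$, and the encoder's posterior $q(z|x)$ is pulled toward the prior of its label with a per-class KL term, so that $p(x|y)$ is matched across source domains in the latent space. All names, prior choices, and dimensions below are illustrative assumptions.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) )."""
    var_q = np.exp(logvar_q)
    var_p = np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Class-conditional prior matching: each label y gets its own latent prior
# N(mu_y, I). Pulling q(z|x) toward the prior of its label aligns samples of
# the same class from different domains in the shared latent space.
rng = np.random.default_rng(0)
num_classes, latent_dim = 3, 4
class_prior_means = rng.normal(size=(num_classes, latent_dim))

def conditional_alignment_loss(mu_q, logvar_q, y):
    """KL between the encoder posterior and the class-y unit-variance prior."""
    mu_p = class_prior_means[y]
    logvar_p = np.zeros(latent_dim)
    return kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
```

In a full model this term would be one part of a variational objective alongside a reconstruction or classification loss; it is shown here only to make the "prior distribution matching in a latent space" phrase concrete.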
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Class Overwhelms: Mutual Conditional Blended-Target Domain Adaptation [5.77521191881575]
Current methods of blended-target domain adaptation (BTDA) usually infer or rely on domain label information.
We propose a categorical domain discriminator guided by uncertainty to explicitly model and directly align categorical distributions.
Our approach outperforms the state-of-the-art in BTDA even compared with methods utilizing domain labels.
arXiv Detail & Related papers (2023-02-03T03:08:31Z)
- Domain Adaptation under Open Set Label Shift [39.424134505152544]
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS).
OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning.
We propose practical methods for both tasks that leverage black-box predictors.
arXiv Detail & Related papers (2022-07-26T17:09:48Z)
- Domain-shift adaptation via linear transformations [11.541238742226199]
A predictor, $f_A$, learned with data from a source domain (A) might not be accurate on a target domain (B) when their distributions are different.
We propose an approach to project the source and target domains into a lower-dimensional, common space.
We show the effectiveness of our approach on simulated data and on binary digit classification tasks, obtaining improvements of up to 48% in accuracy when correcting for the domain shift in the data.
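One simple, concrete instance of aligning domains with a linear transformation (second-order moment matching in the spirit of CORAL-style whitening and re-coloring; not necessarily this paper's exact projection) maps source features so their mean and covariance match the target's:

```python
import numpy as np

def align_second_order(Xs, Xt, eps=1e-6):
    """Map source features Xs so their mean/covariance match target Xt."""
    def cov_pow(X, p):
        # (covariance + eps*I) raised to the power p via eigendecomposition
        C = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** p) @ vecs.T
    Xs_centered = Xs - Xs.mean(axis=0)
    # whiten with the source covariance, re-color with the target covariance,
    # then shift to the target mean
    return Xs_centered @ cov_pow(Xs, -0.5) @ cov_pow(Xt, 0.5) + Xt.mean(axis=0)
```

A classifier trained on the transformed source features then sees inputs whose first two moments agree with the target domain; the `eps` ridge keeps the whitening step stable when the covariance is near-singular.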
arXiv Detail & Related papers (2022-01-14T02:49:03Z)
- Adversarial Unsupervised Domain Adaptation with Conditional and Label Shift: Infer, Align and Iterate [47.67549731439979]
We propose an adversarial unsupervised domain adaptation (UDA) approach with the inherent conditional and label shifts.
We infer the marginal $p(y)$ and align $p(x|y)$ iteratively during training, and precisely align the posterior $p(y|x)$ at test time.
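The test-time posterior alignment can be sketched as Bayes reweighting: when $p(x|y)$ is shared but the label marginal shifts, the target posterior is proportional to the source posterior times the prior ratio $p_t(y)/p_s(y)$. A minimal sketch follows; estimating $p_t(y)$ itself is the harder inference step and is omitted here.

```python
import numpy as np

def correct_posterior(posterior_src, prior_src, prior_tgt):
    """Reweight a source-trained posterior p_s(y|x) by the prior ratio
    p_t(y)/p_s(y) and renormalize; valid when p(x|y) is shared across domains."""
    reweighted = posterior_src * (prior_tgt / prior_src)
    return reweighted / reweighted.sum(axis=-1, keepdims=True)
```

For example, a classifier trained on balanced source data but deployed where one class dominates will systematically under-predict that class; the reweighting above restores the calibrated posterior under the label-shift assumption.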
arXiv Detail & Related papers (2021-07-28T16:28:01Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Simultaneous Semantic Alignment Network for Heterogeneous Domain Adaptation [67.37606333193357]
We propose a Simultaneous Semantic Alignment Network (SSAN) to simultaneously exploit correlations among categories and align the centroids for each category across domains.
By leveraging target pseudo-labels, a robust triplet-centroid alignment mechanism is explicitly applied to align feature representations for each category.
Experiments on various HDA tasks across text-to-image, image-to-image and text-to-text successfully validate the superiority of our SSAN against state-of-the-art HDA methods.
arXiv Detail & Related papers (2020-08-04T16:20:37Z)
- Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift [20.533804144992207]
Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting.
We propose a new assumption, generalized label shift ($GLS$), to improve robustness against mismatched label distributions.
Our algorithms outperform the base versions, with vast improvements for large label distribution mismatches.
arXiv Detail & Related papers (2020-03-10T00:35:23Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate our BA$3$US surpasses state-of-the-arts for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.