Domain Adaptation with Factorizable Joint Shift
- URL: http://arxiv.org/abs/2203.02902v1
- Date: Sun, 6 Mar 2022 07:58:51 GMT
- Title: Domain Adaptation with Factorizable Joint Shift
- Authors: Hao He, Yuzhe Yang, Hao Wang
- Abstract summary: We propose a new assumption, Factorizable Joint Shift (FJS), to handle the co-existence of sampling bias in both covariates and labels.
FJS assumes the independence of the bias between the two factors.
We also propose Joint Importance Aligning (JIA), a discriminative learning objective to obtain joint importance estimators.
- Score: 18.95213249351176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing domain adaptation (DA) usually assumes the domain shift comes from
either the covariates or the labels. However, in real-world applications,
samples selected from different domains could have biases in both the
covariates and the labels. In this paper, we propose a new assumption,
Factorizable Joint Shift (FJS), to handle the co-existence of sampling bias in
covariates and labels. Although allowing for the shift from both sides, FJS
assumes the independence of the bias between the two factors. We provide
theoretical and empirical understandings about when FJS degenerates to prior
assumptions and when it is necessary. We further propose Joint Importance
Aligning (JIA), a discriminative learning objective to obtain joint importance
estimators for both supervised and unsupervised domain adaptation. Our method
can be seamlessly incorporated with existing domain adaptation algorithms for
better importance estimation and weighting on the training data. Experiments on
a synthetic dataset demonstrate the advantage of our method.
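The core of the FJS assumption is that the joint importance weight factorizes into a covariate factor and a label factor. A minimal sketch of importance-weighted training under this factorization, using made-up factor functions `u` and `v` purely for illustration (the paper's JIA objective learns these estimators discriminatively, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x is a scalar covariate, y a binary label.
x = rng.normal(size=100)
y = (rng.random(100) < 0.5).astype(int)

# Hypothetical factorized importance weights: under FJS the joint
# importance w(x, y) factors as u(x) * v(y), i.e. the covariate-side
# and label-side sampling biases are independent.
def u(x):
    # Covariate factor (an arbitrary positive function for illustration).
    return np.exp(-0.5 * x**2) / np.exp(-0.5)

def v(y):
    # Label factor (made-up class-ratio correction).
    return np.where(y == 1, 1.5, 0.75)

w = u(x) * v(y)  # per-sample joint importance weight

# Importance-weighted training loss (squared error as a stand-in
# for whatever base loss the downstream DA algorithm uses).
pred = np.zeros_like(x, dtype=float)
loss = np.mean(w * (pred - y) ** 2)
```

Reweighting the source loss by `w` is how such estimators plug into existing DA algorithms, which is the "seamless incorporation" the abstract refers to.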
Related papers
- Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation [73.3083304858763]
This paper systematically studies the impact of mixup under the domain adaptive semantic segmentation task.
Specifically, we achieve domain mixup in two steps: cut and paste.
We provide extensive ablation experiments to empirically verify our main components of the framework.
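The cut-and-paste step described above can be sketched on toy arrays; this is an assumption-laden illustration of a generic cut-and-paste mixup for segmentation (the paper's actual bidirectional scheme may differ in region selection and blending):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images" (H x W x 3) and per-pixel label maps for two domains.
src_img, src_lab = rng.random((32, 32, 3)), rng.integers(0, 5, (32, 32))
tgt_img, tgt_lab = rng.random((32, 32, 3)), rng.integers(0, 5, (32, 32))

def cut_and_paste(img_a, lab_a, img_b, lab_b, box):
    """Paste a rectangular region of (img_a, lab_a) onto (img_b, lab_b)."""
    y0, y1, x0, x1 = box
    img, lab = img_b.copy(), lab_b.copy()
    img[y0:y1, x0:x1] = img_a[y0:y1, x0:x1]
    lab[y0:y1, x0:x1] = lab_a[y0:y1, x0:x1]
    return img, lab

# Bidirectional mixup: a source patch into the target image, and
# a target patch into the source image.
box = (8, 24, 8, 24)
mix_tgt_img, mix_tgt_lab = cut_and_paste(src_img, src_lab, tgt_img, tgt_lab, box)
mix_src_img, mix_src_lab = cut_and_paste(tgt_img, tgt_lab, src_img, src_lab, box)
```

Both mixed image/label pairs can then be fed to the segmentation network as additional training data.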
arXiv Detail & Related papers (2023-03-17T05:22:44Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
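Maximum Mean Discrepancy measures the distance between two sample sets in a kernel-induced feature space. A minimal self-contained estimate with an RBF kernel (the memory-bank variant used by DaC is not reproduced here):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy.
    return (rbf(x, x, gamma).mean()
            + rbf(y, y, gamma).mean()
            - 2 * rbf(x, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(64, 2)), rng.normal(size=(64, 2)))
shifted = mmd2(rng.normal(size=(64, 2)), rng.normal(3.0, 1.0, size=(64, 2)))
```

Minimizing such an MMD term between source-like and target-specific features is what reduces the distribution mismatch the entry describes.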
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Towards Backwards-Compatible Data with Confounded Domain Adaptation [0.0]
We seek to achieve general-purpose data backwards compatibility by modifying generalized label shift (GLS).
We present a novel framework for this problem, based on minimizing the expected divergence between the source and target conditional distributions.
We provide concrete implementations using the Gaussian reverse Kullback-Leibler divergence and the maximum mean discrepancy.
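The Gaussian KL divergence mentioned above has a well-known closed form; a small sketch computing it between two hypothetical source and target conditional Gaussians (the "reverse" direction is a convention choice, shown here as one possibility):

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) )."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Hypothetical Gaussian source and target conditionals.
mu_s, cov_s = np.zeros(2), np.eye(2)
mu_t, cov_t = np.array([1.0, 0.0]), 2.0 * np.eye(2)

# "Reverse" KL: the source distribution placed in the second argument
# (conventions vary across papers; this is one concrete choice).
rev = gauss_kl(mu_t, cov_t, mu_s, cov_s)
```

An MMD term over samples from the two conditionals (as in the previous entry) is the kernel-based alternative the same line mentions.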
arXiv Detail & Related papers (2022-03-23T20:53:55Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
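The two stages above can be illustrated with a linear stand-in on synthetic data, in the spirit of two-stage least squares; this is a toy sketch with a made-up data-generating process, not the paper's actual IV-DG procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical toy setup: features of domain A act like an instrument
# for features of domain B, both driven by a shared latent factor.
latent = rng.normal(size=n)
x_a = latent + 0.1 * rng.normal(size=n)        # domain-A features
x_b = 2.0 * latent + 0.1 * rng.normal(size=n)  # domain-B features
y = 3.0 * latent + 0.1 * rng.normal(size=n)    # labels

# Stage 1: regress domain-B features on domain-A features
# (a linear stand-in for "learning the conditional distribution").
beta1 = np.polyfit(x_a, x_b, 1)
x_b_hat = np.polyval(beta1, x_a)

# Stage 2: predict labels from the fitted conditional mean,
# which screens out the confounded part of the raw features.
beta2 = np.polyfit(x_b_hat, y, 1)
y_hat = np.polyval(beta2, x_b_hat)
```

In the linear case this recovers the classic two-stage least squares estimator; IV-DG generalizes the idea to learned conditional distributions.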
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Semantic Concentration for Domain Adaptation [23.706231329913113]
Domain adaptation (DA) addresses label annotation and dataset bias issues by transferring knowledge from a label-rich source domain to a related but unlabeled target domain.
A mainstream of DA methods is to align the feature distributions of the two domains.
We propose Semantic Concentration for Domain Adaptation to encourage the model to concentrate on the most principal features.
arXiv Detail & Related papers (2021-08-12T13:04:36Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in a sense of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Robust Fairness under Covariate Shift [11.151913007808927]
Making predictions that are fair with regard to protected group membership has become an important requirement for classification algorithms.
We propose an approach that obtains the predictor that is robust to the worst-case in terms of target performance.
arXiv Detail & Related papers (2020-10-11T04:42:01Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- NestedVAE: Isolating Common Factors via Weak Supervision [45.366986365879505]
We identify the connection between the task of bias reduction and that of isolating factors common between domains.
To isolate the common factors we combine the theory of deep latent variable models with information bottleneck theory.
Two outer VAEs with shared weights attempt to reconstruct the input and infer a latent space, whilst a nested VAE attempts to reconstruct the latent representation of one image, from the latent representation of its paired image.
arXiv Detail & Related papers (2020-02-26T15:49:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.