Optimal Transport for Conditional Domain Matching and Label Shift
- URL: http://arxiv.org/abs/2006.08161v4
- Date: Tue, 19 Oct 2021 10:44:43 GMT
- Title: Optimal Transport for Conditional Domain Matching and Label Shift
- Authors: Alain Rakotomamonjy (Criteo AI Lab), Rémi Flamary (CMAP), Gilles
Gasso (DocApp - LITIS), Mokhtar Z. Alaya (LMAC, Compiègne), Maxime Berar
(DocApp - LITIS), Nicolas Courty (OBELIX)
- Abstract summary: We address the problem of unsupervised domain adaptation under the setting of generalized target shift.
For good generalization, it is necessary to learn a latent representation in which both marginals and class-conditional distributions are aligned across domains.
We propose a learning problem that minimizes an importance-weighted loss in the source domain and a Wasserstein distance between weighted marginals.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of unsupervised domain adaptation under the setting of
generalized target shift (joint class-conditional and label shifts). For this
framework, we theoretically show that, for good generalization, it is necessary
to learn a latent representation in which both marginals and class-conditional
distributions are aligned across domains. To this end, we propose a learning
problem that minimizes an importance-weighted loss in the source domain and a
Wasserstein distance between weighted marginals. For a proper weighting, we
provide an estimator of target label proportion by blending mixture estimation
and optimal matching by optimal transport. This estimation comes with
theoretical guarantees of correctness under mild assumptions. Our experimental
results show that our method performs better on average than competitors across
a range of domain adaptation problems including \emph{digits}, \emph{VisDA} and
\emph{Office}. Code for this paper is available at
\url{https://github.com/arakotom/mars_domain_adaptation}.
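The proportion-estimation step described above can be illustrated in a few lines. This is not the authors' implementation (see the linked repository for that); it is a minimal sketch that assumes class-conditional means are already available (e.g., source means from labels, target cluster means from a mixture-estimation step) and uses the one-to-one assignment problem, a special case of optimal transport, for the matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_clusters(source_means, target_means):
    """Match each source class-conditional mean to a target cluster mean
    by solving the optimal assignment problem on pairwise squared
    Euclidean costs (Hungarian algorithm)."""
    cost = ((source_means[:, None, :] - target_means[None, :, :]) ** 2).sum(-1)
    _, cols = linear_sum_assignment(cost)
    return cols  # cols[k] = target cluster matched to source class k

def importance_weights(source_props, target_props, matching):
    """Per-class importance weights w_k = p_T(y=k) / p_S(y=k), reading
    target proportions in source-class order via the matching."""
    return target_props[matching] / source_props

# Toy example: 3 classes in 2-D; target clusters are a permuted, shifted copy.
src_means = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
perm = np.array([2, 0, 1])            # unknown correspondence
tgt_means = src_means[perm] + 0.1     # small conditional shift

matching = match_clusters(src_means, tgt_means)
w = importance_weights(np.array([1/3, 1/3, 1/3]),
                       np.array([0.2, 0.3, 0.5]), matching)
```

The recovered matching inverts the unknown permutation, and the resulting weights can then reweight the source loss and the marginals entering the Wasserstein term.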
Related papers
- Optimal Transport for Domain Adaptation through Gaussian Mixture Models [7.292229955481438]
We propose a novel approach, where we model the data distributions through Gaussian mixture models.
The optimal transport solution gives us a matching between source and target domain mixture components.
We experiment with 2 domain adaptation benchmarks in fault diagnosis, showing that our methods have state-of-the-art performance.
arXiv Detail & Related papers (2024-03-18T09:32:33Z)
- Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation [73.3083304858763]
This paper systematically studies the impact of mixup under the domain adaptive semantic segmentation task.
Specifically, we achieve domain mixup in two steps: cut and paste.
We provide extensive ablation experiments to empirically verify our main components of the framework.
arXiv Detail & Related papers (2023-03-17T05:22:44Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Towards Backwards-Compatible Data with Confounded Domain Adaptation [0.0]
We seek to achieve general-purpose data backwards compatibility by modifying generalized label shift (GLS).
We present a novel framework for this problem, based on minimizing the expected divergence between the source and target conditional distributions.
We provide concrete implementations using the Gaussian reverse Kullback-Leibler divergence and the maximum mean discrepancy.
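The maximum mean discrepancy mentioned above can be estimated directly from samples. A minimal sketch with an RBF kernel follows; the bandwidth `gamma` and the biased V-statistic form are assumptions here, not details from the paper:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y
    under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
same = rbf_mmd2(X, X)           # identical samples: exactly 0
shifted = rbf_mmd2(X, X + 3.0)  # shifted samples: clearly positive
```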
arXiv Detail & Related papers (2022-03-23T20:53:55Z)
- Mapping conditional distributions for domain adaptation under generalized target shift [0.0]
We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
Recent approaches learn domain-invariant representations, yet they have practical limitations and rely on strong assumptions that may not hold in practice.
In this paper, we explore a novel and general approach to align pretrained representations, which circumvents existing drawbacks.
arXiv Detail & Related papers (2021-10-26T14:25:07Z)
- Tune it the Right Way: Unsupervised Validation of Domain Adaptation via Soft Neighborhood Density [125.64297244986552]
We propose an unsupervised validation criterion that measures the density of soft neighborhoods by computing the entropy of the similarity distribution between points.
Our criterion is simpler than competing validation methods, yet more effective.
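The criterion can be sketched as the mean entropy of each point's softmax-normalized similarity distribution over the other points. The temperature value, the use of cosine similarity, and the self-similarity masking below are assumptions for illustration, not details from the paper:

```python
import numpy as np

def soft_neighborhood_density(features, temperature=0.05):
    """Mean entropy of each point's softmax-normalized similarity
    distribution over all other points; higher values indicate
    denser soft neighborhoods."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature
    np.fill_diagonal(sim, -np.inf)          # a point is not its own neighbor
    sim -= sim.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(sim)
    p /= p.sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

When all points coincide, every neighborhood distribution is uniform and the entropy attains its maximum, log(n-1); spread-out features yield lower values.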
arXiv Detail & Related papers (2021-08-24T17:41:45Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
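A minibatch estimate of a KL term works because KL(p || q) = E_p[log p(z) - log q(z)], so averaging over samples drawn from p gives an unbiased estimate. A minimal sketch with known 1-D Gaussian densities follows; the specific distributions are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def mc_kl(samples_p, logpdf_p, logpdf_q):
    """Monte Carlo estimate of KL(p || q) using samples drawn from p."""
    return float(np.mean(logpdf_p(samples_p) - logpdf_q(samples_p)))

# Example: KL(N(0,1) || N(1,1)) = 0.5 analytically.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, size=200_000)
log_p = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_q = lambda x: -0.5 * (x - 1.0)**2 - 0.5 * np.log(2 * np.pi)
kl_est = mc_kl(z, log_p, log_q)
```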
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- A Unified Joint Maximum Mean Discrepancy for Domain Adaptation [73.44809425486767]
This paper theoretically derives a unified form of JMMD that is easy to optimize.
From the revealed unified JMMD, we illustrate that JMMD degrades the feature-label dependence that benefits classification.
We propose a novel MMD matrix to promote the dependence, and devise a novel label kernel that is robust to label distribution shift.
arXiv Detail & Related papers (2021-01-25T09:46:14Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method BA$^3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$^3$US surpasses state-of-the-art methods for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.