Mapping conditional distributions for domain adaptation under
generalized target shift
- URL: http://arxiv.org/abs/2110.15057v1
- Date: Tue, 26 Oct 2021 14:25:07 GMT
- Title: Mapping conditional distributions for domain adaptation under
generalized target shift
- Authors: Matthieu Kirchmeyer (MLIA), Alain Rakotomamonjy (LITIS), Emmanuel de
Bezenac (MLIA), Patrick Gallinari (MLIA)
- Abstract summary: We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
Recent approaches learn domain-invariant representations, yet they have practical limitations and rely on strong assumptions that may not hold in practice.
In this paper, we explore a novel and general approach to align pretrained representations, which circumvents existing drawbacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of unsupervised domain adaptation (UDA) between a
source and a target domain under conditional and label shift, a.k.a. Generalized
Target Shift (GeTarS). Unlike simpler UDA settings, few works have addressed
this challenging problem. Recent approaches learn domain-invariant
representations, yet they have practical limitations and rely on strong
assumptions that may not hold in practice. In this paper, we explore a novel
and general approach to align pretrained representations, which circumvents
existing drawbacks. Instead of constraining representation invariance, it
learns an optimal transport map, implemented as a NN, which maps source
representations onto target ones. Our approach is flexible and scalable, it
preserves the problem's structure and it has strong theoretical guarantees
under mild assumptions. In particular, our solution is unique, matches
conditional distributions across domains, recovers target proportions and
explicitly controls the target generalization risk. Through an exhaustive
comparison on several datasets, we challenge the state-of-the-art in GeTarS.
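The paper's core idea is to learn an optimal transport (OT) map, implemented as a neural network, that sends source representations onto target ones. As a rough illustration of the underlying OT-map idea only, not the authors' method, the sketch below computes an entropic-regularized transport plan between two empirical feature samples with Sinkhorn iterations and derives a map via barycentric projection; `sinkhorn_plan` and `barycentric_map` are illustrative names.

```python
import numpy as np

def sinkhorn_plan(Xs, Xt, reg=0.5, n_iters=200):
    """Entropic-regularized OT plan between two empirical samples (Sinkhorn)."""
    n, m = len(Xs), len(Xt)
    # Squared-Euclidean cost matrix between source and target points
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / reg)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan P, sums to 1

def barycentric_map(Xs, Xt, P):
    """Map each source point to the barycenter of the target mass it receives."""
    return (P @ Xt) / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 2))  # "source" features
Xt = rng.normal(3.0, 1.0, size=(100, 2))  # shifted "target" features
P = sinkhorn_plan(Xs, Xt)
Xs_mapped = barycentric_map(Xs, Xt, P)
print(Xs_mapped.mean(axis=0))  # close to the empirical target mean (about 3 per dim)
```

In the paper the map is parameterized by a neural network rather than computed sample-to-sample, which is what makes the approach scale and generalize beyond the training samples.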
Related papers
- Domain Adaptation with Cauchy-Schwarz Divergence [39.36943882475589]
We introduce the Cauchy-Schwarz divergence to the problem of unsupervised domain adaptation (UDA).
The CS divergence offers a theoretically tighter generalization error bound than the popular Kullback-Leibler divergence.
We show how the CS divergence can be conveniently used in both distance metric- or adversarial training-based UDA frameworks.
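The CS divergence between two densities admits a simple empirical estimate from samples via Parzen windows. A minimal sketch, assuming one common convention D_CS(p,q) = -log(<p,q> / sqrt(<p,p><q,q>)) with a Gaussian kernel (the paper's exact estimator may differ):

```python
import numpy as np

def gauss_gram(X, Y, sigma=1.0):
    # Gaussian-kernel Gram matrix between sample sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def cs_divergence(X, Y, sigma=1.0):
    """Empirical Cauchy-Schwarz divergence via Parzen-window density estimates."""
    pq = gauss_gram(X, Y, sigma).mean()  # estimates <p, q>
    pp = gauss_gram(X, X, sigma).mean()  # estimates <p, p>
    qq = gauss_gram(Y, Y, sigma).mean()  # estimates <q, q>
    return -np.log(pq / np.sqrt(pp * qq))

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(200, 2))
Y_near = rng.normal(0, 1, size=(200, 2))
Y_far = rng.normal(4, 1, size=(200, 2))
print(cs_divergence(X, X))      # about 0 for identical samples
print(cs_divergence(X, Y_near)) # small for matching distributions
print(cs_divergence(X, Y_far))  # large for shifted distributions
```

The divergence is zero only when the two distributions coincide, which is what makes it usable as an alignment objective in distance-metric or adversarial UDA frameworks.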
arXiv Detail & Related papers (2024-05-30T12:01:12Z)
- Domain Adaptation via Rebalanced Sub-domain Alignment [22.68115322836635]
Unsupervised domain adaptation (UDA) is a technique used to transfer knowledge from a labeled source domain to a related unlabeled target domain.
Many UDA methods have shown success in the past, but they often assume that the source and target domains must have identical class label distributions.
We propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
arXiv Detail & Related papers (2023-02-03T21:30:40Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
A second method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
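With a probabilistic (Gaussian) representation network, the KL term between two known densities can be estimated by Monte Carlo over minibatch samples. A minimal sketch of that estimation idea, not the paper's implementation, validated against the closed-form KL between two univariate Gaussians:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_p, sig_p = 0.0, 1.0   # "source" representation distribution
mu_q, sig_q = 1.0, 1.5   # "target" representation distribution

def log_normal(z, mu, sig):
    # Log-density of a univariate Gaussian
    return -0.5 * np.log(2 * np.pi * sig**2) - (z - mu) ** 2 / (2 * sig**2)

# Monte Carlo minibatch estimate: KL(p || q) = E_{z~p}[log p(z) - log q(z)]
z = rng.normal(mu_p, sig_p, size=100_000)
kl_mc = np.mean(log_normal(z, mu_p, sig_p) - log_normal(z, mu_q, sig_q))

# Closed form for two univariate Gaussians, for comparison
kl_exact = np.log(sig_q / sig_p) + (sig_p**2 + (mu_p - mu_q) ** 2) / (2 * sig_q**2) - 0.5
print(kl_mc, kl_exact)  # the two estimates agree closely
```

Because the estimate only needs log-densities evaluated at sampled points, it fits naturally into minibatch training, which is the efficiency claim in the summary above.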
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization [7.253255826783766]
We propose a masking strategy that determines a continuous weight based on the agreement of the gradients flowing through each edge of the network.
SAND-mask is validated on the DomainBed benchmark for domain generalization.
arXiv Detail & Related papers (2021-06-04T05:20:54Z)
- A Theory of Label Propagation for Subpopulation Shift [61.408438422417326]
We propose a provably effective framework for domain adaptation based on label propagation.
We obtain end-to-end finite-sample guarantees on the entire algorithm.
We extend our theoretical framework to a more general setting of source-to-target transfer based on a third unlabeled dataset.
arXiv Detail & Related papers (2021-02-22T17:27:47Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Uncertainty-Aware Consistency Regularization for Cross-Domain Semantic Segmentation [63.75774438196315]
Unsupervised domain adaptation (UDA) aims to adapt existing models of the source domain to a new target domain with only unlabeled data.
Most existing methods suffer from noticeable negative transfer resulting from either the error-prone discriminator network or the unreasonable teacher model.
We propose an uncertainty-aware consistency regularization method for cross-domain semantic segmentation.
arXiv Detail & Related papers (2020-04-19T15:30:26Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$^3$US surpasses the state of the art on partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.