Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal Transport
- URL: http://arxiv.org/abs/2210.13331v1
- Date: Mon, 24 Oct 2022 15:34:09 GMT
- Authors: Mourad El Hamri, Younès Bennani, Issam Falih
- Abstract summary: Domain adaptation arises as an important problem in statistical learning theory.
Recent advances show that the success of domain adaptation algorithms heavily relies on their ability to minimize the divergence between the probability distributions of the source and target domains.
We propose a new theoretical framework for domain adaptation through hierarchical optimal transport.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation arises as an important problem in statistical learning
theory when the data-generating processes differ between training and test
samples, respectively called source and target domains. Recent theoretical
advances show that the success of domain adaptation algorithms heavily relies
on their ability to minimize the divergence between the probability
distributions of the source and target domains. However, minimizing this
divergence cannot be done independently of the minimization of other key
ingredients such as the source risk or the combined error of the ideal joint
hypothesis. The trade-off between these terms is often ensured by algorithmic
solutions that remain implicit and not directly reflected by the theoretical
guarantees. To address this issue, we propose in this paper a new
theoretical framework for domain adaptation through hierarchical optimal
transport. This framework provides more explicit generalization bounds and
allows us to consider the natural hierarchical organization of samples in both
domains into classes or clusters. Additionally, we provide a new divergence
measure between the source and target domains, called the Hierarchical
Wasserstein distance, which indicates, under mild assumptions, the structures
that have to be aligned for a successful adaptation.
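The abstract does not spell out the construction, but hierarchical optimal transport is commonly set up as a two-level problem: an inner Wasserstein distance between the empirical measures of individual classes or clusters supplies the ground cost for an outer optimal transport problem over class proportions. The sketch below illustrates that two-level construction with the POT library; the helper name hierarchical_wasserstein, the squared-Euclidean inner cost, and the exact EMD solver are illustrative assumptions, not the paper's definition.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def hierarchical_wasserstein(Xs, ys, Xt, yt):
    """Two-level OT sketch: inner Wasserstein distances between
    class-conditional empirical measures become the ground cost of an
    outer OT problem over class proportions. Illustrative only."""
    cs, ct = np.unique(ys), np.unique(yt)
    C = np.zeros((len(cs), len(ct)))
    for i, a in enumerate(cs):
        for j, b in enumerate(ct):
            A, B = Xs[ys == a], Xt[yt == b]
            M = ot.dist(A, B)  # pairwise squared Euclidean costs
            wa = np.full(len(A), 1.0 / len(A))  # uniform weights within class
            wb = np.full(len(B), 1.0 / len(B))
            C[i, j] = ot.emd2(wa, wb, M)  # inner OT cost between the two classes
    p = np.array([(ys == a).mean() for a in cs])  # source class proportions
    q = np.array([(yt == b).mean() for b in ct])  # target cluster proportions
    return ot.emd2(p, q, C)  # outer OT over classes, inner costs as ground metric

# Toy usage: two 2-D domains, three classes each.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(150, 2)), rng.integers(0, 3, 150)
Xt, yt = rng.normal(loc=0.5, size=(120, 2)), rng.integers(0, 3, 120)
print(hierarchical_wasserstein(Xs, ys, Xt, yt))
```

Replacing the final ot.emd2 with ot.emd would return the outer coupling itself, i.e., which source classes get matched to which target clusters; that is the kind of structural alignment the Hierarchical Wasserstein distance is meant to expose.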
Related papers
- Optimal Aggregation of Prediction Intervals under Unsupervised Domain Shift [9.387706860375461]
A distribution shift occurs when the underlying data-generating process changes, leading to a deviation in the model's performance.
Prediction intervals serve as a crucial tool for characterizing the uncertainties induced by the underlying distribution.
We propose methodologies for aggregating prediction intervals to obtain one with minimal width and adequate coverage on the target domain.
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
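The constrained formulation and benchmark results above are the paper's own; purely to illustrate the core objective, the snippet below computes a batch-averaged KL divergence between the class posteriors predicted by two hypothetical domain-specific models on the same inputs. The function posterior_kl is an illustrative assumption, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

def posterior_kl(logits_a, logits_b):
    """Batch-averaged KL( p_a(y|x) || p_b(y|x) ) between class posteriors
    of two models evaluated on the same inputs (shapes: [batch, classes])."""
    log_pa = F.log_softmax(logits_a, dim=-1)
    log_pb = F.log_softmax(logits_b, dim=-1)
    return (log_pa.exp() * (log_pa - log_pb)).sum(dim=-1).mean()
```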
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target; a further method minimizes the gap directly by adapting model parameters using online target samples.
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amounts of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
- Maximizing Conditional Independence for Unsupervised Domain Adaptation [9.533515002375545]
We study how to transfer a learner from a labeled source domain to an unlabeled target domain with different distributions.
In addition to unsupervised domain adaptation, we extend our method to the multi-source scenario in a natural and elegant way.
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
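The summary only asserts that the KL term admits an efficient minibatch estimate; below is one standard Monte Carlo estimator, under the assumption that the probabilistic representation network outputs a diagonal Gaussian per example and that each domain's marginal over representations is approximated by the minibatch mixture of those Gaussians. The helper minibatch_kl is hypothetical, not the paper's estimator.

```python
import math
import torch
from torch.distributions import Normal

def minibatch_kl(mu_s, std_s, mu_t, std_t):
    """Monte Carlo estimate of KL(p_s(z) || p_t(z)), with each marginal
    approximated by the minibatch mixture of per-example diagonal
    Gaussians from a probabilistic encoder. Shapes: [batch, dim]."""
    z = Normal(mu_s, std_s).rsample()  # one sample per source example

    def log_mixture(z, mu, std):
        # log of (1/B) * sum_j N(z | mu_j, diag(std_j^2)), per sample in z
        comp = Normal(mu.unsqueeze(0), std.unsqueeze(0))  # [1, B, D]
        logp = comp.log_prob(z.unsqueeze(1)).sum(-1)      # [Bz, B]
        return torch.logsumexp(logp, dim=1) - math.log(mu.shape[0])

    return (log_mixture(z, mu_s, std_s) - log_mixture(z, mu_t, std_t)).mean()
```

Because rsample keeps the estimate differentiable, the same quantity can serve directly as a training loss on the encoder.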
- A Theory of Label Propagation for Subpopulation Shift [61.408438422417326]
We propose a provably effective framework for domain adaptation based on label propagation.
We obtain end-to-end finite-sample guarantees on the entire algorithm.
We extend our theoretical framework to a more general setting of source-to-target transfer based on a third unlabeled dataset.
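The finite-sample guarantees are specific to the paper's analysis; for orientation only, classical label propagation over a joint source-target graph can be run with scikit-learn by marking target labels as missing. The data and hyperparameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 5)), rng.integers(0, 3, 100)  # labeled source
Xt = rng.normal(loc=0.5, size=(80, 5))                       # unlabeled target

X = np.vstack([Xs, Xt])
y = np.concatenate([ys, -np.ones(80, dtype=int)])  # -1 marks unlabeled points

model = LabelSpreading(kernel="knn", n_neighbors=7)  # propagate over a kNN graph
model.fit(X, y)
target_preds = model.transduction_[len(Xs):]  # propagated labels for the target
```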
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
- Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.