On Localized Discrepancy for Domain Adaptation
- URL: http://arxiv.org/abs/2008.06242v1
- Date: Fri, 14 Aug 2020 08:30:02 GMT
- Title: On Localized Discrepancy for Domain Adaptation
- Authors: Yuchen Zhang, Mingsheng Long, Jianmin Wang, Michael I. Jordan
- Abstract summary: This paper studies the localized discrepancies defined on the hypothesis space after localization.
Their values differ when the two domains are exchanged, and thus they can reveal asymmetric transfer difficulties.
- Score: 146.4580736832752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose discrepancy-based generalization theories for unsupervised
domain adaptation. Previous theories introduced distribution discrepancies
defined as the supremum over the complete hypothesis space. The hypothesis space
may contain hypotheses that lead to unnecessary overestimation of the risk
bound. This paper studies localized discrepancies, defined on the hypothesis
space after localization. First, we show that these discrepancies have
desirable properties. They can be significantly smaller than the previous
discrepancies. Their values differ when the two domains are exchanged, and
thus they can reveal asymmetric transfer difficulties. Next, we derive improved
generalization bounds with these discrepancies. We show that the discrepancies
can influence the rate of sample complexity. Finally, we further extend the
localized discrepancies to achieve super transfer and derive generalization
bounds that can be even more sample-efficient on the source domain.
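To make the notion of localization concrete, here is a minimal sketch in standard discrepancy-distance notation; the symbols $\mathcal{H}$, $\ell$, $\mathrm{err}_P$, and the radius $r$ are assumptions for illustration, not taken from the paper, and the paper's exact definitions may differ:

```latex
% Classical discrepancy: supremum over the complete hypothesis space H
\mathrm{disc}_{\mathcal{H}}(P, Q)
  = \sup_{h, h' \in \mathcal{H}}
    \Big| \mathbb{E}_{x \sim P}\big[\ell(h(x), h'(x))\big]
        - \mathbb{E}_{x \sim Q}\big[\ell(h(x), h'(x))\big] \Big|

% Localization: keep only hypotheses with small source risk (radius r assumed)
\mathcal{H}_r = \{\, h \in \mathcal{H} : \mathrm{err}_P(h) \le r \,\}

% Localized discrepancy: the same supremum, but over H_r only
\mathrm{disc}_{\mathcal{H}_r}(P, Q)
  = \sup_{h, h' \in \mathcal{H}_r}
    \Big| \mathbb{E}_{x \sim P}\big[\ell(h(x), h'(x))\big]
        - \mathbb{E}_{x \sim Q}\big[\ell(h(x), h'(x))\big] \Big|
```

Because $\mathcal{H}_r \subseteq \mathcal{H}$, the localized quantity can only be smaller than the classical one, which is one way to read the "unnecessary overestimation" point above; and since the localizing constraint refers to the source domain $P$, exchanging $P$ and $Q$ generally changes its value, illustrating the asymmetry the abstract mentions.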
Related papers
- A New Theoretical Perspective on Data Heterogeneity in Federated Optimization [39.75009345804017]
In federated learning (FL), data heterogeneity is the main reason that existing theoretical analyses are pessimistic about the convergence rate.
In particular, for many FL algorithms, the convergence rate grows dramatically when the number of local updates becomes large.
This paper aims to bridge this gap between theoretical understanding and practical performance by providing a theoretical analysis from a new perspective.
arXiv Detail & Related papers (2024-07-22T11:52:58Z) - Domain Adaptation with Cauchy-Schwarz Divergence [39.36943882475589]
We introduce the Cauchy-Schwarz (CS) divergence to the problem of unsupervised domain adaptation (UDA).
The CS divergence offers a theoretically tighter generalization error bound than the popular Kullback-Leibler divergence.
We show how the CS divergence can be conveniently used in both distance metric-based and adversarial training-based UDA frameworks.
arXiv Detail & Related papers (2024-05-30T12:01:12Z) - Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z) - Towards Identifiable Unsupervised Domain Translation: A Diversified Distribution Matching Approach [14.025593338693698]
Unsupervised domain translation (UDT) aims to find functions that convert samples from one domain to another without changing the high-level semantic meaning.
This study delves into the core identifiability inquiry and introduces an MPA elimination theory.
Our theory leads to a UDT learner using distribution matching over auxiliary variable-induced subsets of the domains.
arXiv Detail & Related papers (2024-01-18T01:07:00Z) - Auditing for Spatial Fairness [5.048742886625779]
We study algorithmic fairness when the protected attribute is location.
Similar to established notions of algorithmic fairness, we define spatial fairness as the statistical independence of outcomes from location.
arXiv Detail & Related papers (2023-02-23T20:56:18Z) - Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z) - Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
The other method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z) - Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation represents a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z) - Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Few-shot Domain Adaptation by Causal Mechanism Transfer [107.08605582020866]
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target domain data and many labeled source domain data are available.
Many of the current DA methods base their transfer assumptions on either parametrized distribution shift or apparent distribution similarities.
We propose mechanism transfer, a meta-distributional scenario in which a data generating mechanism is invariant among domains.
arXiv Detail & Related papers (2020-02-10T02:16:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.