Domain Adaptation with Cauchy-Schwarz Divergence
- URL: http://arxiv.org/abs/2405.19978v1
- Date: Thu, 30 May 2024 12:01:12 GMT
- Title: Domain Adaptation with Cauchy-Schwarz Divergence
- Authors: Wenzhe Yin, Shujian Yu, Yicong Lin, Jie Liu, Jan-Jakob Sonke, Efstratios Gavves
- Abstract summary: We introduce the Cauchy-Schwarz (CS) divergence to the problem of unsupervised domain adaptation (UDA).
The CS divergence offers a theoretically tighter generalization error bound than the popular Kullback-Leibler divergence.
We show how the CS divergence can be conveniently used in both distance metric-based and adversarial training-based UDA frameworks.
- Score: 39.36943882475589
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Domain adaptation aims to use training data from one or multiple source domains to learn a hypothesis that can be generalized to a different, but related, target domain. As such, having a reliable measure for evaluating the discrepancy of both marginal and conditional distributions is crucial. We introduce the Cauchy-Schwarz (CS) divergence to the problem of unsupervised domain adaptation (UDA). The CS divergence offers a theoretically tighter generalization error bound than the popular Kullback-Leibler divergence. This holds for the general case of supervised learning, including multi-class classification and regression. Furthermore, we illustrate that the CS divergence enables a simple estimator of the discrepancy of both marginal and conditional distributions between source and target domains in the representation space, without requiring any distributional assumptions. We provide multiple examples to illustrate how the CS divergence can be conveniently used in both distance metric-based and adversarial training-based UDA frameworks, resulting in compelling performance.
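For a concrete handle on the quantity being minimized: the CS divergence between densities p and q is

$$ D_{\mathrm{CS}}(p; q) \;=\; -\log \frac{\left(\int p(x)\, q(x)\, dx\right)^{2}}{\int p(x)^{2}\, dx \,\int q(x)^{2}\, dx}, $$

which is non-negative and zero iff p = q. Below is a minimal sketch of the classical Parzen-window (Gaussian-kernel) estimator of this quantity between two feature batches. This is the standard empirical form from the information-theoretic learning literature, not necessarily the paper's exact marginal/conditional estimator; the bandwidth `sigma`, the batch shapes, and the encoder features are illustrative assumptions.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """Pairwise Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def cs_divergence(X, Y, sigma=1.0):
    """Empirical Cauchy-Schwarz divergence between samples X ~ p and Y ~ q.

    Parzen-window form:
        D_CS = log mean k(X, X) + log mean k(Y, Y) - 2 log mean k(X, Y).
    Non-negative by the Cauchy-Schwarz inequality applied to the
    kernel mean embeddings of the two samples.
    """
    pp = gaussian_gram(X, X, sigma).mean()
    qq = gaussian_gram(Y, Y, sigma).mean()
    pq = gaussian_gram(X, Y, sigma).mean()
    return np.log(pp) + np.log(qq) - 2.0 * np.log(pq)

# Toy usage with features from a hypothetical encoder: the target batch is
# mean-shifted, so the estimated divergence is clearly positive.
rng = np.random.default_rng(0)
source_feats = rng.normal(0.0, 1.0, size=(128, 16))
target_feats = rng.normal(0.5, 1.0, size=(128, 16))
print(cs_divergence(source_feats, target_feats))
```

In a distance metric-based UDA setup of the kind the abstract describes, this scalar would simply be added to the supervised task loss on encoded mini-batches (e.g., total_loss = cls_loss + lam * cs_divergence(f_src, f_tgt) for some hypothetical trade-off weight lam), pulling the source and target feature distributions together during training.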
Related papers
- Guidance Not Obstruction: A Conjugate Consistent Enhanced Strategy for Domain Generalization [50.04665252665413]
We argue that acquiring discriminative generalization between classes within domains is crucial.
In contrast to seeking distribution alignment, we endeavor to safeguard domain-related between-class discrimination.
We employ a novel distribution-level Universum strategy to generate supplementary diverse domain-related class-conditional distributions.
arXiv Detail & Related papers (2024-12-13T12:25:16Z)
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z)
- Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
A second method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Mapping conditional distributions for domain adaptation under generalized target shift [0.0]
We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
Recent approaches learn domain-invariant representations, yet they have practical limitations and rely on strong assumptions that may not hold in practice.
In this paper, we explore a novel and general approach to align pretrained representations, which circumvents existing drawbacks.
arXiv Detail & Related papers (2021-10-26T14:25:07Z)
- Conditional Bures Metric for Domain Adaptation [14.528711361447712]
Unsupervised domain adaptation (UDA) has attracted widespread attention in recent years.
Previous UDA methods assume the marginal distributions of different domains are shifted while ignoring the discriminant information in the label distributions.
In this work, we focus on the conditional distribution shift problem, which is of central concern for current conditional-invariant models.
arXiv Detail & Related papers (2021-07-31T18:06:31Z)
- Contrastive ACE: Domain Generalization Through Alignment of Causal Mechanisms [34.99779761100095]
Domain generalization aims to learn knowledge invariant across different distributions.
We consider the causal invariance of the average causal effect of the features to the labels.
arXiv Detail & Related papers (2021-06-02T04:01:22Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)