Conditional Bures Metric for Domain Adaptation
- URL: http://arxiv.org/abs/2108.00302v1
- Date: Sat, 31 Jul 2021 18:06:31 GMT
- Title: Conditional Bures Metric for Domain Adaptation
- Authors: You-Wei Luo and Chuan-Xian Ren
- Abstract summary: Unsupervised domain adaptation (UDA) has attracted widespread attention in recent years.
Previous UDA methods assume the marginal distributions of different domains are shifted while ignoring the discriminant information in the label distributions.
In this work, we focus on the conditional distribution shift problem which is of great concern to current conditional invariant models.
- Score: 14.528711361447712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a vital problem in classification-oriented transfer, unsupervised domain
adaptation (UDA) has attracted widespread attention in recent years. Previous
UDA methods assume the marginal distributions of different domains are shifted
while ignoring the discriminant information in the label distributions. This
leads to degraded classification performance in real applications. In this
work, we focus on the conditional distribution shift problem which is of great
concern to current conditional invariant models. We aim to develop a kernel covariance embedding for conditional distributions, a direction that remains unexplored.
Theoretically, we propose the Conditional Kernel Bures (CKB) metric for
characterizing conditional distribution discrepancy, and derive an empirical
estimation for the CKB metric without introducing the implicit kernel feature
map. It provides an interpretable approach to understanding the knowledge transfer mechanism. The consistency theory established for the empirical estimator provides a theoretical guarantee of convergence. A conditional distribution
matching network is proposed to learn the conditional invariant and
discriminative features for UDA. Extensive experiments and analysis show the
superiority of our proposed model.
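The CKB metric builds on the Bures (2-Wasserstein) geometry between covariance operators. As a point of reference only, here is a minimal NumPy sketch of the plain, unconditional Bures distance between two empirical feature covariances; the paper's estimator is the conditional, kernelized analogue computed from Gram matrices without an explicit feature map. The function name and toy data are illustrative assumptions.
```python
# Minimal sketch: unconditional Bures distance between empirical covariances.
# The paper's CKB metric generalizes this to conditional, kernelized
# covariance operators; this is only the base case for intuition.
import numpy as np
from scipy.linalg import sqrtm

def bures_distance(X_s, X_t):
    """Bures distance between the covariances of two feature samples.

    X_s: (n, d) source features; X_t: (m, d) target features.
    """
    def cov(X):
        Xc = X - X.mean(axis=0, keepdims=True)   # center the sample
        return Xc.T @ Xc / len(Xc)

    S1, S2 = cov(X_s), cov(X_t)
    S1_half = np.real(sqrtm(S1))                  # principal matrix square root
    cross = np.real(sqrtm(S1_half @ S2 @ S1_half))
    d2 = np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)
    return float(np.sqrt(max(d2, 0.0)))           # clip tiny negative round-off

rng = np.random.default_rng(0)
print(bures_distance(rng.normal(size=(200, 8)),
                     rng.normal(scale=2.0, size=(300, 8))))
```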
Related papers
- COD: Learning Conditional Invariant Representation for Domain Adaptation Regression [20.676363400841495]
Domain Adaptation Regression is developed to generalize label knowledge from a source domain to an unlabeled target domain.
Existing conditional distribution alignment theory and methods with discrete prior are no longer applicable.
To minimize the discrepancy, a COD-based conditional invariant representation learning model is proposed.
arXiv Detail & Related papers (2024-08-13T05:08:13Z)
- Domain Adaptation with Cauchy-Schwarz Divergence [39.36943882475589]
We introduce Cauchy-Schwarz divergence to the problem of unsupervised domain adaptation (UDA)
The CS divergence offers a theoretically tighter generalization error bound than the popular Kullback-Leibler divergence.
We show how the CS divergence can be conveniently used in both distance metric-based and adversarial training-based UDA frameworks.
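For intuition, the CS divergence admits a simple plug-in estimate from samples. Below is a hedged sketch using Gaussian (Parzen) kernels; the fixed bandwidth and this particular estimator are assumptions for illustration, not the cited paper's exact training objective.
```python
# Plug-in estimate of the Cauchy-Schwarz divergence between two samples:
# D_CS = log mean(K_ss) + log mean(K_tt) - 2 log mean(K_st), zero iff p = q.
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """Gram matrix with entries exp(-||a - b||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def cs_divergence(Xs, Xt, sigma=1.0):
    k_ss = gaussian_gram(Xs, Xs, sigma).mean()   # within-source term
    k_tt = gaussian_gram(Xt, Xt, sigma).mean()   # within-target term
    k_st = gaussian_gram(Xs, Xt, sigma).mean()   # cross term
    return np.log(k_ss) + np.log(k_tt) - 2.0 * np.log(k_st)
```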
arXiv Detail & Related papers (2024-05-30T12:01:12Z)
- Harnessing the Power of Vicinity-Informed Analysis for Classification under Covariate Shift [9.530897053573186]
Transfer learning enhances prediction accuracy on a target distribution by leveraging data from a source distribution.
This paper introduces a novel dissimilarity measure that utilizes vicinity information, i.e., the local structure of data points.
We characterize the excess error using the proposed measure and demonstrate faster or competitive convergence rates compared to previous techniques.
arXiv Detail & Related papers (2024-05-27T07:55:27Z)
- Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance in the amount of annotated data between the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Maximizing Conditional Independence for Unsupervised Domain Adaptation [9.533515002375545]
We study how to transfer a learner from a labeled source domain to an unlabeled target domain with different distributions.
In addition to unsupervised domain adaptation, we extend our method to the multi-source scenario in a natural and elegant way.
arXiv Detail & Related papers (2022-03-07T08:59:21Z)
- Certainty Volume Prediction for Unsupervised Domain Adaptation [35.984559137218504]
Unsupervised domain adaptation (UDA) deals with the problem of classifying unlabeled target domain data.
We propose a novel uncertainty-aware domain adaptation setup that models uncertainty as a multivariate Gaussian distribution in feature space.
We evaluate our proposed pipeline on challenging UDA datasets and achieve state-of-the-art results.
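A minimal sketch of the core modeling step, under the simplifying assumption of a diagonal Gaussian: a network head predicts a mean and a per-dimension log-variance for each feature vector, and features are sampled with the reparameterization trick. Names and shapes here are hypothetical.
```python
# Sample features z ~ N(mu, diag(exp(log_var))) via reparameterization,
# so the predicted covariance (the "certainty volume") stays differentiable.
import numpy as np

def sample_uncertain_features(mu, log_var, rng):
    eps = rng.standard_normal(mu.shape)       # standard normal noise
    return mu + np.exp(0.5 * log_var) * eps   # scale and shift

rng = np.random.default_rng(0)
mu = np.zeros((4, 16))                        # batch of 4 predicted means
log_var = np.full((4, 16), -2.0)              # predicted log-variances
z = sample_uncertain_features(mu, log_var, rng)
```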
arXiv Detail & Related papers (2021-11-03T11:22:55Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
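The two stages parallel classic two-stage least squares. As a linear caricature only (IV-DG itself learns both stages with neural networks), assume another domain's features Z serve as the instrument:
```python
# Linear two-stage sketch: stage 1 fits E[X | Z]; stage 2 regresses labels
# on the fitted values, which removes bias from unobserved confounders.
import numpy as np

def two_stage_least_squares(Z, X, y):
    W1, *_ = np.linalg.lstsq(Z, X, rcond=None)      # stage 1: X ~ Z
    X_hat = Z @ W1                                  # confounder-free features
    W2, *_ = np.linalg.lstsq(X_hat, y, rcond=None)  # stage 2: y ~ X_hat
    return W1, W2
```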
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) whose problem data are heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
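For reference, this is the single-machine extra-gradient step that the method localizes and decentralizes; the toy bilinear problem below is an assumption for illustration.
```python
# Extra-gradient for a variational inequality with operator F:
# take a look-ahead step, then update using the look-ahead gradient.
import numpy as np

def extragradient(F, z0, step=0.1, iters=1000):
    z = z0.astype(float)
    for _ in range(iters):
        z_half = z - step * F(z)       # extrapolation (look-ahead)
        z = z - step * F(z_half)       # update at the look-ahead point
    return z

# Toy bilinear saddle point min_x max_y x*y, i.e. F(z) = (y, -x);
# plain gradient descent-ascent diverges here, extra-gradient converges to 0.
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, np.array([1.0, 1.0])))
```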
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- A Unified Joint Maximum Mean Discrepancy for Domain Adaptation [73.44809425486767]
This paper theoretically derives a unified form of JMMD that is easy to optimize.
From the revealed unified JMMD, we illustrate that JMMD degrades the feature-label dependence that benefits classification.
We propose a novel MMD matrix to promote the dependence, and devise a novel label kernel that is robust to label distribution shift.
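As a baseline for comparison, here is a hedged sketch of the standard joint MMD estimator with a product of Gaussian kernels on features and one-hot labels; the paper's unified form and robust label kernel refine this quantity, and the bandwidths here are assumptions.
```python
# Squared joint MMD: feature and label Gram matrices multiply elementwise,
# so the statistic measures discrepancy of the joint (X, Y) distributions.
import numpy as np

def gram(A, B, sigma):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def jmmd(Xs, Ys, Xt, Yt, sigma_x=1.0, sigma_y=1.0):
    """Xs/Xt: (n, d) features; Ys/Yt: (n, c) one-hot (or soft) labels."""
    k_ss = gram(Xs, Xs, sigma_x) * gram(Ys, Ys, sigma_y)
    k_tt = gram(Xt, Xt, sigma_x) * gram(Yt, Yt, sigma_y)
    k_st = gram(Xs, Xt, sigma_x) * gram(Ys, Yt, sigma_y)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```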
arXiv Detail & Related papers (2021-01-25T09:46:14Z)
- Few-shot Domain Adaptation by Causal Mechanism Transfer [107.08605582020866]
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target domain data and many labeled source domain data are available.
Many of the current DA methods base their transfer assumptions on either parametrized distribution shift or apparent distribution similarities.
We propose mechanism transfer, a meta-distributional scenario in which a data generating mechanism is invariant among domains.
arXiv Detail & Related papers (2020-02-10T02:16:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.