Maximizing Conditional Independence for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2203.03212v1
- Date: Mon, 7 Mar 2022 08:59:21 GMT
- Title: Maximizing Conditional Independence for Unsupervised Domain Adaptation
- Authors: Yi-Ming Zhai, You-Wei Luo
- Abstract summary: We study how to transfer a learner from a labeled source domain to an unlabeled target domain with different distributions.
In addition to unsupervised domain adaptation, we extend our method to the multi-source scenario in a natural and elegant way.
- Score: 9.533515002375545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation studies how to transfer a learner from a
labeled source domain to an unlabeled target domain with different
distributions. Existing methods mainly focus on matching the marginal
distributions of the source and target domains, which can lead to a
misalignment of samples from the same class but different domains. In this
paper, we deal with this misalignment by achieving the class-conditioned
transferring from a new perspective. We aim to maximize the conditional
independence of feature and domain given class in the reproducing kernel
Hilbert space. The optimization of the conditional independence measure can be
viewed as minimizing a surrogate of a certain mutual information between
feature and domain. An interpretable empirical estimate of the conditional
dependence is derived and connected with the unconditional case. Moreover, we
provide an upper bound on the target error by taking the class-conditional
distribution into account, which provides a new theoretical insight for most
class-conditioned transferring methods. In addition to unsupervised domain
adaptation, we extend our method to the multi-source scenario in a natural and
elegant way. Extensive experiments on four benchmarks validate the
effectiveness of the proposed models in both unsupervised domain adaptation and
multiple source domain adaptation.
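The conditional dependence measure is kernel-based. As a minimal, hedged sketch of the underlying idea (not the authors' exact RKHS estimator), the snippet below computes an empirical HSIC statistic between features and domain labels within each class and averages over classes; all function names and the delta kernel on domains are our own choices.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Pairwise RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def hsic(K, L):
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def class_conditional_dependence(features, domains, labels, gamma=1.0):
    """Average within-class HSIC between features and domain labels.

    Driving this toward zero encourages features to be independent of the
    domain given the class -- the intuition behind the paper's objective.
    This per-class averaging is a simplification, not the paper's estimator.
    """
    scores = []
    for c in np.unique(labels):
        idx = labels == c
        if idx.sum() < 4:  # too few samples for a stable estimate
            continue
        K = rbf_kernel(features[idx], gamma)           # kernel on features
        d = domains[idx]
        L = (d[:, None] == d[None, :]).astype(float)   # delta kernel on domain labels
        scores.append(hsic(K, L))
    return float(np.mean(scores)) if scores else 0.0
```

With a delta kernel on domain labels, each within-class HSIC acts as a kernelized measure of cross-domain discrepancy for that class, so minimizing the average promotes conditional independence of feature and domain given class.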
Related papers
- Conditional Support Alignment for Domain Adaptation with Label Shift [8.819673391477034]
Unsupervised domain adaptation (UDA) refers to a framework in which a model is trained on labeled samples from the source domain and unlabeled samples from the target domain.
We propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the feature representation distributions of the source and target domains.
arXiv Detail & Related papers (2023-05-29T05:20:18Z)
- Domain Adaptation via Rebalanced Sub-domain Alignment [22.68115322836635]
Unsupervised domain adaptation (UDA) is a technique used to transfer knowledge from a labeled source domain to a related unlabeled target domain.
Many UDA methods have shown success in the past, but they often assume that the source and target domains have identical class label distributions.
We propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
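As a hedged illustration of what reweighting the source classification error can look like in practice (the paper's bound and weighting scheme are more refined; the helper below is our own construction), one can weight each class's source loss by its estimated target frequency:

```python
import numpy as np

def reweighted_source_loss(per_sample_loss, source_labels, target_class_probs):
    """Reweight per-sample source losses so class c contributes in proportion
    to its estimated target frequency rather than its source frequency.

    per_sample_loss:     shape (n,) losses on source samples
    source_labels:       shape (n,) integer class labels
    target_class_probs:  shape (C,) estimated target class distribution
    """
    n = len(source_labels)
    source_class_probs = np.bincount(
        source_labels, minlength=len(target_class_probs)) / n
    # Importance weight w_c = p_target(c) / p_source(c), guarded against empty classes.
    w = target_class_probs / np.maximum(source_class_probs, 1e-8)
    return float(np.mean(w[source_labels] * per_sample_loss))
```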
arXiv Detail & Related papers (2023-02-03T21:30:40Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
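To make the idea concrete, here is a minimal sketch, under our own simplifying assumptions, of penalizing the divergence between the class posteriors of two domains; the paper's constrained maximum cross-domain likelihood objective is more elaborate.

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """KL(p || q) for discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def cross_domain_posterior_gap(probs_a, probs_b):
    """Symmetrized KL between the mean class posteriors of two domains.

    probs_a, probs_b: (n_a, C) and (n_b, C) softmax outputs.
    Minimizing this pushes the two domains' posterior distributions together.
    """
    pa, pb = probs_a.mean(axis=0), probs_b.mean(axis=0)
    return 0.5 * (kl(pa, pb) + kl(pb, pa))
```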
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We then propose two methods: the first estimates the gap to guide the selection of a better hypothesis for the target,
and the second minimizes the gap directly by adapting model parameters using online target samples.
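Adapting parameters with online target samples is commonly instantiated as a test-time objective such as prediction-entropy minimization; the sketch below shows that generic pattern (our stand-in, not necessarily the paper's exact gap-minimization procedure; the model and optimizer are assumed given).

```python
import torch
import torch.nn.functional as F

def online_adaptation_step(model, optimizer, target_batch):
    """One online adaptation step on an unlabeled target batch.

    Minimizes the entropy of the model's predictions -- a common proxy
    objective for shrinking the source-target gap at test time.
    """
    logits = model(target_batch)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```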
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
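A heavily hedged sketch of the distribution-estimation idea: with no source data, approximate each class's source feature distribution by a Gaussian fitted to confidently pseudo-labeled target features, then sample surrogate features for alignment. The actual SFDA-DE estimator differs in its details; everything below, including names and thresholds, is our simplification.

```python
import numpy as np

def estimate_class_gaussians(features, pseudo_labels, confidences, tau=0.9):
    """Fit a Gaussian per class to confidently pseudo-labeled target features."""
    gaussians = {}
    d = features.shape[1]
    for c in np.unique(pseudo_labels):
        sel = (pseudo_labels == c) & (confidences > tau)
        if sel.sum() <= d:  # need enough samples for a usable covariance
            continue
        feats = features[sel]
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(d)  # ridge for stability
        gaussians[c] = (feats.mean(axis=0), cov)
    return gaussians

def sample_surrogate_features(gaussians, n_per_class, rng=None):
    """Draw surrogate 'source-like' features from the estimated Gaussians."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return {c: rng.multivariate_normal(mu, cov, size=n_per_class)
            for c, (mu, cov) in gaussians.items()}
```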
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets knowledge transfer from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that large-scale deep pre-trained models carry rich knowledge for tackling diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption so that the source class space only needs to subsume the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
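As a rough sketch of a multi-sample contrastive loss over cross-domain pairs (ILA-DA's affinity criterion for extracting pairs is more involved; the InfoNCE-style form and all names below are our assumptions):

```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive_loss(src_feats, tgt_feats, similar_mask,
                                  temperature=0.1):
    """InfoNCE-style loss over source-target pairs.

    similar_mask[i, j] = True if source sample i and target sample j are
    deemed similar (e.g., by an affinity criterion); all other pairs act
    as negatives. Assumes at least one source sample has a positive pair.
    """
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    sim = src @ tgt.T / temperature        # (n_src, n_tgt) similarities
    log_p = F.log_softmax(sim, dim=1)      # softmax over target samples
    pos = similar_mask.float()
    # Average log-likelihood of similar pairs, per source sample with >= 1 positive.
    per_src = (log_p * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    has_pos = pos.sum(dim=1) > 0
    return -(per_src[has_pos]).mean()
```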
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the semi-supervised domain adaptation (Semi-DA) setting.
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Domain Adaptation and Image Classification via Deep Conditional Adaptation Network [26.09932710494144]
Unsupervised domain adaptation aims to generalize the supervised model trained on a source domain to an unlabeled target domain.
Marginal distribution alignment of feature spaces is widely used to reduce the domain discrepancy between the source and target domains.
We propose a novel unsupervised domain adaptation method, Deep Conditional Adaptation Network (DCAN), based on conditional distribution alignment of feature spaces.
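A minimal sketch of class-conditional alignment in the spirit of conditional distribution matching: sum an MMD-style discrepancy between source and target features within each (pseudo-)class. DCAN's actual estimator differs; the kernel choice and names here are ours.

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased squared MMD between samples X and Y with an RBF kernel."""
    def k(A, B):
        d2 = (np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-gamma * np.maximum(d2, 0.0))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def conditional_mmd(src_feats, src_labels, tgt_feats, tgt_pseudo, gamma=1.0):
    """Sum of within-class MMDs between source and target features."""
    total = 0.0
    for c in np.unique(src_labels):
        Xs = src_feats[src_labels == c]
        Xt = tgt_feats[tgt_pseudo == c]   # target side uses pseudo-labels
        if len(Xs) and len(Xt):
            total += mmd2(Xs, Xt, gamma)
    return total
```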
arXiv Detail & Related papers (2020-06-14T02:56:01Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$^3$US surpasses state-of-the-art methods for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)