Multi-source Heterogeneous Domain Adaptation with Conditional Weighting
Adversarial Network
- URL: http://arxiv.org/abs/2008.02714v2
- Date: Sun, 12 Sep 2021 09:45:48 GMT
- Title: Multi-source Heterogeneous Domain Adaptation with Conditional Weighting
Adversarial Network
- Authors: Yuan Yao, Xutao Li, Yu Zhang, and Yunming Ye
- Abstract summary: Heterogeneous domain adaptation (HDA) tackles the learning of cross-domain samples with both different probability distributions and feature representations.
In this article, we study the multisource HDA problem and propose a conditional weighting adversarial network (CWAN) to address it.
- Score: 18.168373838347126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous domain adaptation (HDA) tackles the learning of cross-domain
samples with both different probability distributions and feature
representations. Most of the existing HDA studies focus on the single-source
scenario. In reality, however, it is not uncommon to obtain samples from
multiple heterogeneous domains. In this article, we study the multisource HDA
problem and propose a conditional weighting adversarial network (CWAN) to
address it. The proposed CWAN adversarially learns a feature transformer, a
label classifier, and a domain discriminator. To quantify the importance of
different source domains, CWAN introduces a sophisticated conditional weighting
scheme to calculate the weights of the source domains according to the
conditional distribution divergence between the source and target domains.
Different from existing weighting schemes, the proposed conditional weighting
scheme not only weights the source domains but also implicitly aligns the
conditional distributions during the optimization process. Experimental results
clearly demonstrate that the proposed CWAN performs much better than several
state-of-the-art methods on four real-world datasets.
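The abstract describes three adversarially trained components (a feature transformer, a label classifier, and a domain discriminator) plus a conditional weighting scheme that scores each source domain by its conditional distribution divergence from the target. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the module names, the per-class mean discrepancy used as the divergence, and the use of target pseudo-labels are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureTransformer(nn.Module):
    """Projects samples from a domain with input dimension d_in into a shared latent space."""
    def __init__(self, d_in, d_latent=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_latent))

    def forward(self, x):
        return self.net(x)

class LabelClassifier(nn.Module):
    """Predicts class labels from latent features."""
    def __init__(self, d_latent, num_classes):
        super().__init__()
        self.fc = nn.Linear(d_latent, num_classes)

    def forward(self, z):
        return self.fc(z)

class DomainDiscriminator(nn.Module):
    """Adversary that tries to tell source latent features from target latent features."""
    def __init__(self, d_latent=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)

def source_weight(z_src, y_src, z_tgt, y_tgt_pseudo, num_classes):
    """Illustrative conditional weight for one source domain: the smaller the
    class-conditional divergence between source and target latent features
    (here a simple per-class mean discrepancy), the larger the weight."""
    div = z_src.new_zeros(())
    for c in range(num_classes):
        zs, zt = z_src[y_src == c], z_tgt[y_tgt_pseudo == c]
        if len(zs) > 0 and len(zt) > 0:
            div = div + (zs.mean(0) - zt.mean(0)).pow(2).sum()
    # Weights over all source domains are normalized to sum to one afterwards.
    return torch.exp(-div)
```

In a training loop, each source's classification and adversarial losses would be scaled by its normalized weight, so sources whose conditional distributions lie closer to the target contribute more; CWAN's actual weighting scheme and its implicit conditional alignment are more elaborate than this per-class mean discrepancy.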
Related papers
- Semi Supervised Heterogeneous Domain Adaptation via Disentanglement and Pseudo-Labelling [4.33404822906643]
Semi-supervised domain adaptation methods leverage information from a labelled source domain to generalize over a scarcely labelled target domain.
Such a setting is denoted as Semi-Supervised Heterogeneous Domain Adaptation (SSHDA).
We introduce SHeDD (Semi-supervised Heterogeneous Domain Adaptation via Disentanglement), an end-to-end neural framework tailored to learning in the target domain.
arXiv Detail & Related papers (2024-06-20T08:02:49Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
The other method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Maximizing Conditional Independence for Unsupervised Domain Adaptation [9.533515002375545]
We study how to transfer a learner from a labeled source domain to an unlabeled target domain with different distributions.
In addition to unsupervised domain adaptation, we extend our method to the multi-source scenario in a natural and elegant way.
arXiv Detail & Related papers (2022-03-07T08:59:21Z)
- Aggregating From Multiple Target-Shifted Sources [7.644958631142882]
Multi-source domain adaptation aims at leveraging the knowledge from multiple tasks for predicting a related target domain.
In this paper, we analyze the problem of aggregating source domains with different label distributions, where most recent source selection approaches fail.
Our proposed algorithm differs from previous approaches in a key way: it aggregates multiple sources mainly through the similarity of the semantic conditional distributions rather than the marginal distributions.
arXiv Detail & Related papers (2021-05-09T23:25:29Z)
- Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns the domain-invariant and domain-specific representations.
HDAN has exceeded the state of the art on unsupervised DA, multi-source DA and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised domain adaptation (UDA) attempts to recognize the unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distributions while matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network; a rough sketch of this pairing idea appears after this list.
The proposed method outperforms the comparison methods and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z)
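As referenced in the Mutual Learning Network entry above, the following is a minimal sketch, not the ML-MSDA implementation, of pairing the target with each source domain to form one conditional adversarial branch per source. The branch layout, the feature-probability outer product fed to the discriminator, and the simple averaging of branch predictions on the target (`SourceTargetBranch`, `ensemble_predict`) are illustrative assumptions in PyTorch.

```python
import torch
import torch.nn as nn

class SourceTargetBranch(nn.Module):
    """One branch pairing the target with a single source domain: a feature extractor,
    a classifier, and a discriminator fed with a conditional (feature x class-probability)
    input. All branches here assume the same input dimensionality."""
    def __init__(self, d_in, num_classes, d_latent=128):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(d_in, d_latent), nn.ReLU())
        self.classifier = nn.Linear(d_latent, num_classes)
        self.discriminator = nn.Sequential(
            nn.Linear(d_latent * num_classes, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        z = self.extractor(x)
        logits = self.classifier(z)
        p = logits.softmax(dim=1)
        # Outer product of latent features and class probabilities as the
        # conditional input to the domain discriminator.
        cond = torch.bmm(z.unsqueeze(2), p.unsqueeze(1)).flatten(1)
        return logits, self.discriminator(cond)

def ensemble_predict(branches, x_target):
    """Average the per-branch class probabilities on target samples."""
    probs = [branch(x_target)[0].softmax(dim=1) for branch in branches]
    return torch.stack(probs).mean(dim=0)
```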