Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation
- URL: http://arxiv.org/abs/2007.08801v3
- Date: Tue, 28 Jul 2020 15:12:38 GMT
- Title: Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation
- Authors: Hang Wang, Minghao Xu, Bingbing Ni, Wenjun Zhang
- Abstract summary: We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
- Score: 56.694330303488435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transferring knowledge learned from multiple source domains to a target
domain is a more practical and challenging task than conventional single-source
domain adaptation. Furthermore, the increase of modalities makes it more
difficult to align feature distributions among multiple domains. To mitigate
these problems, we propose a Learning to Combine for Multi-Source Domain
Adaptation (LtC-MSDA) framework that explores interactions among domains. In a nutshell,
a knowledge graph is constructed on the prototypes of various domains to
realize the information propagation among semantically adjacent
representations. On such basis, a graph model is learned to predict query
samples under the guidance of correlated prototypes. In addition, we design a
Relation Alignment Loss (RAL) to facilitate the consistency of categories'
relational interdependency and the compactness of features, which boosts
features' intra-class invariance and inter-class separability. Comprehensive
results on public benchmark datasets demonstrate that our approach outperforms
existing methods by a remarkable margin. Our code is available at
\url{https://github.com/ChrisAllenMing/LtC-MSDA}
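The prototype-based graph propagation described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' released implementation (see the linked repository for that): the cosine-similarity adjacency, the single propagation step, and all function names here are illustrative choices.

```python
import numpy as np

def build_prototypes(features, labels, num_classes):
    """Mean feature vector per class: one domain's prototypes."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def propagate(prototypes, query, tau=1.0):
    """One step of similarity-weighted message passing over the graph
    whose nodes are all domains' prototypes plus the query sample."""
    nodes = np.vstack([prototypes, query[None, :]])
    # Cosine-similarity adjacency between every pair of nodes.
    normed = nodes / np.linalg.norm(nodes, axis=1, keepdims=True)
    adj = np.exp(normed @ normed.T / tau)
    adj /= adj.sum(axis=1, keepdims=True)  # row-normalize the weights
    return adj @ nodes                     # aggregated node features

# Toy example: 2 source domains, 2 classes, 3-D features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 3))
labels = np.array([0, 0, 1, 1, 0, 0, 1, 1])
protos_d1 = build_prototypes(feats[:4], labels[:4], 2)
protos_d2 = build_prototypes(feats[4:], labels[4:], 2)
all_protos = np.vstack([protos_d1, protos_d2])
query = rng.normal(size=3)
out = propagate(all_protos, query)  # (4 prototypes + 1 query) x 3
```

After propagation, the query row mixes in information from semantically similar prototypes across domains; in the paper's framework, a learned graph model (rather than this fixed similarity rule) guides the prediction of query samples.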
Related papers
- Domain Expansion and Boundary Growth for Open-Set Single-Source Domain Generalization [70.02187124865627]
Open-set single-source domain generalization aims to use a single source domain to learn a robust model that can be generalized to unknown target domains.
We propose a novel learning approach based on domain expansion and boundary growth to expand the scarce source samples.
Our approach can achieve significant improvements and reach state-of-the-art performance on several cross-domain image classification datasets.
arXiv Detail & Related papers (2024-11-05T09:08:46Z) - Improving Intrusion Detection with Domain-Invariant Representation Learning in Latent Space [4.871119861180455]
We introduce a two-phase representation learning technique using multi-task learning.
We disentangle the latent space by minimizing the mutual information between the prior and latent space.
We assess the model's efficacy across multiple cybersecurity datasets.
arXiv Detail & Related papers (2023-12-28T17:24:13Z) - MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation [70.62860473259444]
Universal domain adaptation (UniDA) is a practical but challenging problem.
Existing UniDA methods may overlook intra-domain variations in the target domain.
We propose a novel Mutual Learning Network (MLNet) with neighborhood invariance for UniDA.
arXiv Detail & Related papers (2023-12-13T03:17:34Z) - Domain Attention Consistency for Multi-Source Domain Adaptation [100.25573559447551]
The key design is a feature channel attention module, which aims to identify transferable features (attributes).
Experiments on three MSDA benchmarks show that our DAC-Net achieves new state-of-the-art performance on all of them.
arXiv Detail & Related papers (2021-11-06T15:56:53Z) - Multi-Source Domain Adaptation via Supervised Contrastive Learning and Confident Consistency Regularization [0.0]
Multi-Source Unsupervised Domain Adaptation (multi-source UDA) aims to learn a model from several labeled source domains.
We propose Contrastive Multi-Source Domain Adaptation (CMSDA) for multi-source UDA that addresses this limitation.
arXiv Detail & Related papers (2021-06-30T14:39:15Z) - Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image styles, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm, further improving generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.