Automatic Online Multi-Source Domain Adaptation
- URL: http://arxiv.org/abs/2109.01996v1
- Date: Sun, 5 Sep 2021 05:07:16 GMT
- Title: Automatic Online Multi-Source Domain Adaptation
- Authors: Renchunzi Xie, Mahardhika Pratama
- Abstract summary: An online domain adaptation technique under multi-source streaming processes, namely automatic online multi-source domain adaptation (AOMSDA), is proposed in this paper.
A numerical study demonstrates that AOMSDA is capable of outperforming its counterparts in 5 of 8 study cases.
- Score: 15.475463516901936
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge transfer across several streaming processes remains a
challenging problem, not only because each stream follows a different
distribution but also because data streams present rapidly changing and
never-ending environments. Despite growing research achievements in this
area, most existing works are developed for a single source domain, which
limits their resilience: exploiting multiple source domains helps a model
recover from concept drifts quickly and avoid the negative transfer problem.
An online domain adaptation technique under multi-source streaming
processes, namely automatic online multi-source domain adaptation (AOMSDA),
is proposed in this paper. The online domain adaptation strategy of AOMSDA
is formulated under a coupled generative and discriminative approach of a
denoising autoencoder (DAE), where a central moment discrepancy (CMD)-based
regularizer is integrated to handle multiple source domains, thereby taking
advantage of complementary information sources. The asynchronous concept
drifts taking place at different time periods are addressed by a
self-organizing structure and a node re-weighting strategy. Our numerical
study demonstrates that AOMSDA outperforms its counterparts in 5 of 8 study
cases, while the ablation study shows the advantage of each learning
component. In addition, AOMSDA is applicable to any number of source
streams. The source code of AOMSDA is shared publicly at
https://github.com/Renchunzi-Xie/AOMSDA.git.
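The CMD-based regularizer mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' released code: PyTorch, the function names, the order K = 5, and the averaging over all domain pairs are assumptions made here; only the moment-matching terms follow the standard CMD definition.

```python
# Minimal sketch (not the AOMSDA implementation) of a Central Moment Discrepancy
# (CMD) penalty between hidden representations of several streams. The pairwise
# averaging over domains is an illustrative assumption about how a multi-source
# regularizer could be assembled.
import itertools
import torch


def cmd(x: torch.Tensor, y: torch.Tensor, k_max: int = 5,
        a: float = 0.0, b: float = 1.0) -> torch.Tensor:
    """CMD between two feature batches bounded in [a, b] (e.g. sigmoid outputs)."""
    span = b - a
    mx, my = x.mean(dim=0), y.mean(dim=0)
    # First-order term: distance between the means.
    d = torch.norm(mx - my, p=2) / span
    # Higher-order terms: distances between central moments of order 2..k_max.
    for k in range(2, k_max + 1):
        cx = ((x - mx) ** k).mean(dim=0)
        cy = ((y - my) ** k).mean(dim=0)
        d = d + torch.norm(cx - cy, p=2) / span ** k
    return d


def multi_source_cmd(hidden_by_domain):
    """Average CMD over all pairs of domain-specific hidden representations."""
    pairs = list(itertools.combinations(hidden_by_domain, 2))
    return sum(cmd(h1, h2) for h1, h2 in pairs) / max(len(pairs), 1)


# Usage: hidden activations (e.g. sigmoid outputs of a DAE encoder) per stream.
h_src1 = torch.sigmoid(torch.randn(64, 32))
h_src2 = torch.sigmoid(torch.randn(64, 32))
h_tgt = torch.sigmoid(torch.randn(64, 32))
reg = multi_source_cmd([h_src1, h_src2, h_tgt])  # add lambda * reg to the DAE loss
```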
Related papers
- More is Better: Deep Domain Adaptation with Multiple Sources [34.26271755493111]
Multi-source domain adaptation (MDA) is a powerful and practical extension in which the labeled data may be collected from multiple sources with different distributions.
In this survey, we first define various MDA strategies. Then we systematically summarize and compare modern MDA methods in the deep learning era from different perspectives.
arXiv Detail & Related papers (2024-05-01T03:37:12Z)
- Noisy Universal Domain Adaptation via Divergence Optimization for Visual Recognition [30.31153237003218]
A novel scenario named Noisy UniDA is proposed to transfer knowledge from a labeled source domain to an unlabeled target domain.
A multi-head convolutional neural network framework is proposed to address all of the challenges faced in the Noisy UniDA at once.
arXiv Detail & Related papers (2023-04-20T14:18:38Z)
- Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation [86.02485817444216]
We introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA.
MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts.
Experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
arXiv Detail & Related papers (2022-09-30T03:40:10Z)
- Federated Semi-Supervised Domain Adaptation via Knowledge Transfer [6.7543356061346485]
This paper proposes an innovative approach to achieve semi-supervised domain adaptation (SSDA) over multiple distributed and confidential datasets.
Federated Semi-Supervised Domain Adaptation (FSSDA) integrates SSDA with federated learning based on strategically designed knowledge distillation techniques.
Extensive experiments are conducted to demonstrate the effectiveness and efficiency of FSSDA design.
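As a rough illustration of the knowledge distillation building block referenced above (not FSSDA's specific design, which is not detailed in this summary), a generic temperature-scaled distillation loss looks like the following sketch; the function name and default temperature are assumptions.

```python
# Generic soft-label knowledge distillation loss (Hinton et al.), shown only to
# illustrate the kind of component FSSDA-style methods build on; the cited
# paper's exact distillation design is not reproduced here.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_student = F.log_softmax(student_logits / t, dim=1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t ** 2)
```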
arXiv Detail & Related papers (2022-07-21T19:36:10Z)
- Balancing Discriminability and Transferability for Source-Free Domain Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
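For readers unfamiliar with the mixup operation referenced above, a generic sketch follows; the pairing of original and translated samples and the Beta parameter are illustrative assumptions rather than the cited paper's exact recipe.

```python
# Generic input mixup (Zhang et al.) between an original sample and a paired
# "translated" sample, shown only to illustrate the interpolation idea; not the
# cited paper's exact scheme.
import torch


def mixup(x_original, x_translated, alpha: float = 0.3):
    """Convex combination of two aligned batches with a Beta-sampled weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x_original + (1.0 - lam) * x_translated, lam
```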
arXiv Detail & Related papers (2022-06-16T09:06:22Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Multi-Source domain adaptation via supervised contrastive learning and confident consistency regularization [0.0]
Multi-Source Unsupervised Domain Adaptation (multi-source UDA) aims to learn a model from several labeled source domains that performs well on an unlabeled target domain.
We propose Contrastive Multi-Source Domain Adaptation (CMSDA) for multi-source UDA that addresses this limitation.
arXiv Detail & Related papers (2021-06-30T14:39:15Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
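The prototype-graph propagation mentioned above can be pictured with a generic one-step smoothing operation; the sketch below is an assumption-laden illustration (adjacency construction and update rule are placeholders), not LtC-MSDA's actual formulation.

```python
# Generic one-step propagation over a prototype graph, illustrating the idea of
# "information propagation among semantically adjacent representations"; the
# adjacency construction and update rule of LtC-MSDA itself are not reproduced.
import torch


def propagate(prototypes: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
    """Average each prototype with its neighbours via a row-normalized adjacency."""
    adj = adjacency + torch.eye(adjacency.size(0))  # keep self-information
    adj = adj / adj.sum(dim=1, keepdim=True)        # row-normalize
    return adj @ prototypes                          # smoothed prototypes
```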
arXiv Detail & Related papers (2020-07-17T07:52:44Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z) - Multi-source Domain Adaptation in the Deep Learning Era: A Systematic
Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)