Agile Multi-Source-Free Domain Adaptation
- URL: http://arxiv.org/abs/2403.05062v1
- Date: Fri, 8 Mar 2024 05:17:10 GMT
- Title: Agile Multi-Source-Free Domain Adaptation
- Authors: Xinyao Li, Jingjing Li, Fengling Li, Lei Zhu, Ke Lu
- Abstract summary: The Bi-level ATtention ENsemble (Bi-ATEN) module learns both intra-domain weights and inter-domain ensemble weights to strike a fine balance between instance specificity and domain consistency.
We achieve comparable or even superior performance on the challenging DomainNet benchmark with less than 3% of the parameters trained and 8 times the throughput of the SOTA method.
- Score: 25.06352660046911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficiently utilizing rich knowledge in pretrained models has become a
critical topic in the era of large models. This work focuses on adaptively
transferring knowledge from multiple source-pretrained models to an unlabeled
target domain without accessing the source data. Despite being a practically
useful setting, existing methods require extensive parameter tuning over each
source model, which is computationally expensive when facing abundant source
domains or larger source models. To address this challenge, we propose a novel
approach that is free of parameter tuning over the source backbones. Our
technical contribution lies in the Bi-level ATtention ENsemble (Bi-ATEN)
module, which learns both intra-domain weights and inter-domain ensemble
weights to achieve a fine balance between instance specificity and domain
consistency. By slightly tuning the source bottlenecks, we achieve comparable or
even superior performance on the challenging DomainNet benchmark with less than
3% of the parameters trained and 8 times the throughput of the SOTA method.
Furthermore, with minor modifications, the proposed module can be easily
plugged into existing methods for a performance boost of more than 4%. Code is
available at https://github.com/TL-UESTC/Bi-ATEN.
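The abstract describes Bi-ATEN only at a high level. As a rough, hypothetical sketch of the idea, and not the authors' implementation (see the linked repository for that), the snippet below combines features from frozen source backbones through small trainable bottlenecks, per-instance intra-domain attention scores, and global inter-domain ensemble weights. All class names, variable names, and dimensions here are illustrative assumptions.

```python
# Hypothetical sketch of a bi-level attention ensemble, loosely following the
# abstract: source backbones stay frozen, and only small per-source bottlenecks
# plus two sets of attention weights are trained. Names are illustrative, not
# taken from the authors' code (see https://github.com/TL-UESTC/Bi-ATEN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLevelAttentionEnsemble(nn.Module):
    def __init__(self, num_sources: int, feat_dim: int, bottleneck_dim: int, num_classes: int):
        super().__init__()
        # Lightweight trainable bottleneck and classifier per frozen source backbone.
        self.bottlenecks = nn.ModuleList(
            nn.Linear(feat_dim, bottleneck_dim) for _ in range(num_sources)
        )
        self.classifiers = nn.ModuleList(
            nn.Linear(bottleneck_dim, num_classes) for _ in range(num_sources)
        )
        # Intra-domain attention: scores each instance's fit to each source.
        self.intra_attn = nn.Linear(bottleneck_dim, 1)
        # Inter-domain weights: a global preference over source domains.
        self.inter_logits = nn.Parameter(torch.zeros(num_sources))

    def forward(self, source_feats: list) -> torch.Tensor:
        # source_feats[i]: (batch, feat_dim) features from frozen backbone i.
        logits, intra_scores = [], []
        for bottleneck, clf, f in zip(self.bottlenecks, self.classifiers, source_feats):
            z = F.relu(bottleneck(f))                 # (batch, bottleneck_dim)
            logits.append(clf(z))                     # (batch, num_classes)
            intra_scores.append(self.intra_attn(z))   # (batch, 1)
        logits = torch.stack(logits, dim=1)           # (batch, S, num_classes)
        intra = torch.cat(intra_scores, dim=1)        # (batch, S), per instance
        inter = self.inter_logits.unsqueeze(0)        # (1, S), per domain
        # Balance instance specificity against domain consistency.
        weights = torch.softmax(intra + inter, dim=1) # (batch, S)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)

# Example: three source domains, 512-d backbone features, 345 classes (DomainNet).
model = BiLevelAttentionEnsemble(num_sources=3, feat_dim=512, bottleneck_dim=256, num_classes=345)
feats = [torch.randn(8, 512) for _ in range(3)]
print(model(feats).shape)  # torch.Size([8, 345])
```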
Related papers
- Gradual Fine-Tuning with Graph Routing for Multi-Source Unsupervised Domain Adaptation [5.125509132300994]
Multi-source unsupervised domain adaptation aims to leverage labeled data from multiple source domains for training a machine learning model.
We introduce a framework for gradual fine-tuning (GFT) of machine learning models on multiple source domains.
arXiv Detail & Related papers (2024-11-11T17:59:21Z) - Train Till You Drop: Towards Stable and Robust Source-free Unsupervised 3D Domain Adaptation [62.889835139583965]
We tackle the problem of source-free unsupervised domain adaptation (SFUDA) for 3D semantic segmentation.
It amounts to performing domain adaptation on an unlabeled target domain without any access to source data.
A common issue with existing SFUDA approaches is that performance degrades after some training time.
arXiv Detail & Related papers (2024-09-06T17:13:14Z) - Memory-Efficient Pseudo-Labeling for Online Source-Free Universal Domain Adaptation using a Gaussian Mixture Model [3.1265626879839923]
In practice, domain shifts are likely to occur between training and test data, necessitating domain adaptation (DA) to adjust the pre-trained source model to the target domain.
Universal domain adaptation (UniDA) has gained attention for addressing the possibility of an additional category (label) shift between the source and target domain.
We propose a novel method that continuously captures the distribution of known classes in the feature space using a Gaussian mixture model (GMM); see the illustrative sketch below.
Our approach achieves state-of-the-art results in all experiments on the DomainNet and Office-Home datasets.
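As a generic illustration of the idea named above, and not the paper's actual online update or memory scheme, the hypothetical sketch below maintains one Gaussian per known class in feature space and pseudo-labels a target feature only when its best class log-likelihood clears a threshold. All names, the momentum, and the threshold value are assumptions.

```python
# Generic sketch of pseudo-labeling with per-class Gaussians in feature space.
# Illustrative only; the paper's online update rules and memory-efficient
# scheme are not reproduced here.
import numpy as np
from scipy.stats import multivariate_normal

class GaussianClassModel:
    def __init__(self, num_classes: int, dim: int):
        self.means = np.zeros((num_classes, dim))
        self.covs = np.stack([np.eye(dim)] * num_classes)

    def update(self, feat: np.ndarray, label: int, momentum: float = 0.99):
        # Exponential moving average keeps the estimates adaptive online.
        self.means[label] = momentum * self.means[label] + (1 - momentum) * feat
        diff = (feat - self.means[label])[:, None]
        self.covs[label] = momentum * self.covs[label] + (1 - momentum) * (diff @ diff.T)

    def pseudo_label(self, feat: np.ndarray, threshold: float = -50.0):
        # Score the sample under each class Gaussian; accept only confident ones.
        ll = np.array([
            multivariate_normal.logpdf(feat, mean=m, cov=c, allow_singular=True)
            for m, c in zip(self.means, self.covs)
        ])
        best = int(ll.argmax())
        return best if ll[best] > threshold else None  # None = rejected/unknown
```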
arXiv Detail & Related papers (2024-07-19T11:13:31Z) - Few-shot Image Generation via Adaptation-Aware Kernel Modulation [33.191479192580275]
Few-shot image generation (FSIG) aims to generate new and diverse samples given an extremely limited number of samples from a domain.
Recent work has addressed the problem using a transfer learning approach, leveraging a GAN pretrained on a large-scale source domain dataset.
We propose Adaptation-Aware kernel Modulation (AdAM) to address general FSIG across different degrees of source-target domain proximity.
arXiv Detail & Related papers (2022-10-29T10:26:40Z) - RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation [80.03883315743715]
Source-free domain adaptation transfers the source-trained model to the target domain without exposing the source data.
This paradigm is still at risk of data leakage due to adversarial attacks on the source model.
We propose a novel approach named RAIN (RegulArization on Input and Network) for black-box domain adaptation, applying both input-level and network-level regularization.
arXiv Detail & Related papers (2022-08-22T18:18:47Z) - Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding [60.226644697970116]
Domain classification is the fundamental task in natural language understanding (NLU).
Most existing continual learning approaches suffer from low accuracy and performance fluctuation.
We propose a hyperparameter-free continual learning model for text data that stably produces high performance under various environments.
arXiv Detail & Related papers (2022-01-05T02:46:16Z) - On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation from the perspective of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose a unified self-training framework, regularized by consistency of predictions in local neighborhoods of target samples (see the illustrative sketch below).
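As a minimal, hypothetical sketch of neighborhood-consistency regularization in the spirit of the entry above, and not the paper's actual objective, the snippet below pulls each target sample's prediction toward those of its nearest neighbors in feature space. The function name, k, and the loss form are assumptions.

```python
# Hypothetical sketch of consistency regularization over local neighborhoods
# of target samples for self-training. Illustrative only.
import torch
import torch.nn.functional as F

def neighborhood_consistency_loss(feats: torch.Tensor,
                                  probs: torch.Tensor,
                                  k: int = 5) -> torch.Tensor:
    """feats: (N, D) target features; probs: (N, C) softmax predictions."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t()                      # cosine similarity (N, N)
    sim.fill_diagonal_(-float("inf"))            # exclude self-matches
    idx = sim.topk(k, dim=1).indices             # k nearest neighbors (N, k)
    neighbor_probs = probs[idx]                  # (N, k, C)
    # Cross-entropy between each prediction and its neighbors' predictions.
    log_p = probs.unsqueeze(1).clamp_min(1e-8).log()   # (N, 1, C)
    return -(neighbor_probs * log_p).sum(-1).mean()

# Example usage with random stand-ins for features and predictions.
feats = torch.randn(32, 128)
probs = torch.softmax(torch.randn(32, 10), dim=1)
print(neighborhood_consistency_loss(feats, probs))
```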
arXiv Detail & Related papers (2021-04-10T02:21:09Z) - Unsupervised Multi-source Domain Adaptation Without Access to Source Data [58.551861130011886]
Unsupervised Domain Adaptation (UDA) aims to learn a predictor model for an unlabeled domain by transferring knowledge from a separate labeled source domain.
We propose a novel and efficient algorithm that automatically combines the source models with suitable weights so that the combination performs at least as well as the best source model.
arXiv Detail & Related papers (2021-04-05T10:45:12Z) - Multi-path Neural Networks for On-device Multi-domain Visual Classification [55.281139434736254]
This paper proposes a novel approach to automatically learn a multi-path network for multi-domain visual classification on mobile devices.
The proposed multi-path network is learned from neural architecture search by applying one reinforcement learning controller for each domain to select the best path in the super-network created from a MobileNetV3-like search space.
The determined multi-path model selectively shares parameters across domains in shared nodes while keeping domain-specific parameters within non-shared nodes in individual domain paths.
arXiv Detail & Related papers (2020-10-10T05:13:49Z) - Online Meta-Learning for Multi-Source and Semi-Supervised Domain Adaptation [4.1799778475823315]
We propose a framework to enhance performance by meta-learning the initial conditions of existing DA algorithms.
We present variants for both multi-source unsupervised domain adaptation (MSDA) and semi-supervised domain adaptation (SSDA).
We achieve state-of-the-art results on several DA benchmarks, including the largest-scale DomainNet.
arXiv Detail & Related papers (2020-04-09T07:48:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.