Autonomous Source Knowledge Selection in Multi-Domain Adaptation
- URL: http://arxiv.org/abs/2512.14710v1
- Date: Mon, 08 Dec 2025 07:04:14 GMT
- Title: Autonomous Source Knowledge Selection in Multi-Domain Adaptation
- Authors: Keqiuyin Li, Jie Lu, Hua Zuo, Guangquan Zhang
- Abstract summary: We propose a multi-domain adaptation method named Autonomous Source Knowledge Selection (AutoS). The proposed method employs a density-driven selection strategy to choose source samples during training and to determine which source models should contribute to target prediction. Simultaneously, a pseudo-label enhancement module built on a pre-trained multimodal model is employed to mitigate target label noise and improve self-supervision.
- Score: 24.68167440089257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised multi-domain adaptation plays a key role in transfer learning by leveraging rich information acquired from multiple source domains to solve a target task in an unlabeled target domain. However, multiple source domains often contain redundant or unrelated information that can harm transfer performance, especially in massive-source settings. Effective strategies are therefore urgently needed to identify and select the most transferable knowledge from massive source domains for the target task. In this paper, we propose a multi-domain adaptation method named \underline{\textit{Auto}}nomous Source Knowledge \underline{\textit{S}}election (AutoS) to autonomously select source training samples and models, enabling prediction on the target task using more relevant and transferable source information. The proposed method employs a density-driven selection strategy to choose source samples during training and to determine which source models should contribute to target prediction. Simultaneously, a pseudo-label enhancement module built on a pre-trained multimodal model is employed to mitigate target label noise and improve self-supervision. Experiments on real-world datasets demonstrate the superiority of the proposed method.
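The density-driven selection idea can be illustrated with a minimal sketch (not the paper's actual implementation): score each source sample by a Gaussian-kernel density estimate under the target feature distribution, keep the densest samples per source domain, and gate each source model by its domain-level mean density. The function names, bandwidth, and both thresholds below are illustrative assumptions.

```python
import numpy as np

def density_scores(source_feats, target_feats, bandwidth=1.0):
    """Score each source sample by its Gaussian-kernel density
    under the target feature distribution (higher = closer to target)."""
    # pairwise squared distances, shape (n_source, n_target)
    d2 = ((source_feats[:, None, :] - target_feats[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)

def select_sources(per_domain_feats, target_feats,
                   sample_frac=0.5, model_thresh=0.05):
    """Pick the densest samples in each source domain, and flag which
    source models should contribute to target prediction."""
    selected, contributes = {}, {}
    for name, feats in per_domain_feats.items():
        scores = density_scores(feats, target_feats)
        k = max(1, int(sample_frac * len(scores)))
        selected[name] = np.argsort(scores)[::-1][:k]      # top-k densest samples
        contributes[name] = scores.mean() > model_thresh   # domain-level gate
    return selected, contributes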
Related papers
- Progressive Multi-Source Domain Adaptation for Personalized Facial Expression Recognition [64.37805399216347]
Personalized facial expression recognition (FER) involves adapting a machine learning model using samples from labeled source domains and an unlabeled target domain. We propose a progressive MSDA approach that gradually introduces information from subjects based on their similarity to the target subject.
arXiv Detail & Related papers (2025-04-05T19:14:51Z)
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Noisy Universal Domain Adaptation via Divergence Optimization for Visual Recognition [30.31153237003218]
A novel scenario named Noisy UniDA is proposed to transfer knowledge from a labeled source domain to an unlabeled target domain.
A multi-head convolutional neural network framework is proposed to address all of the challenges of Noisy UniDA at once.
arXiv Detail & Related papers (2023-04-20T14:18:38Z)
- RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation [80.03883315743715]
Source-free domain adaptation transfers the source-trained model to the target domain without exposing the source data.
This paradigm is still at risk of data leakage due to adversarial attacks on the source model.
We propose a novel approach named RAIN (RegulArization on Input and Network) for Black-Box domain adaptation from both input-level and network-level regularization.
arXiv Detail & Related papers (2022-08-22T18:18:47Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Multi-Source Domain Adaptation for Object Detection [52.87890831055648]
We propose a unified Faster R-CNN based framework, termed Divide-and-Merge Spindle Network (DMSN)
DMSN can simultaneously enhance domain invariance and preserve discriminative power.
We develop a novel pseudo learning algorithm to approximate the optimal parameters of the pseudo target subset.
arXiv Detail & Related papers (2021-06-30T03:17:20Z)
- Curriculum Manager for Source Selection in Multi-Source Domain Adaptation [65.22251010276652]
We propose an adversarial agent that learns a dynamic curriculum for source samples, called Curriculum Manager for Source Selection (CMSS)
CMSS does not require any knowledge of the domain labels, yet it outperforms other methods on four well-known benchmarks by significant margins.
arXiv Detail & Related papers (2020-07-02T17:15:01Z)
- Multi-source Attention for Unsupervised Domain Adaptation [15.900069711477542]
We model source-selection as an attention-learning problem, where we learn attention over sources for a given target instance.
For this purpose, we first independently learn source-specific classification models, and a relatedness map between sources and target domains.
We then learn attention-weights over the sources for aggregating the predictions of the source-specific models.
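The attention-weighted aggregation described above can be sketched as follows (a hypothetical minimal version, not the authors' code): relatedness scores between each source and the target instance are turned into softmax attention weights, which then mix the class probabilities produced by the source-specific models.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend_predict(source_probs, relatedness):
    """Aggregate per-source class probabilities for one target instance,
    weighted by softmax attention over source-target relatedness scores."""
    w = softmax(np.asarray(relatedness, dtype=float))
    probs = np.asarray(source_probs, dtype=float)   # shape (n_sources, n_classes)
    return (w[:, None] * probs).sum(axis=0)         # convex combination of rows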
arXiv Detail & Related papers (2020-04-14T15:51:02Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and asks how such a model can be effectively utilized without source data to solve UDA problems.
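One way to adapt with only a trained source model can be sketched as follows (an illustrative SHOT-style refinement step under assumed inputs, not necessarily the paper's exact procedure): pseudo-label target samples with the frozen source classifier's soft predictions, build soft class centroids in target feature space, and reassign each sample to its nearest centroid.

```python
import numpy as np

def refine_pseudo_labels(target_feats, source_logits):
    """Refine target pseudo-labels without any source data: start from the
    frozen source model's logits, compute soft class centroids in target
    feature space, and reassign each sample by cosine similarity."""
    probs = np.exp(source_logits - source_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)               # softmax over classes
    # soft centroids: probability-weighted mean feature per class
    centroids = (probs.T @ target_feats) / probs.sum(axis=0)[:, None]
    # cosine similarity of each sample to each centroid
    f = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return (f @ c.T).argmax(axis=1)                         # refined hard labels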
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.