Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2410.15811v2
- Date: Sat, 26 Oct 2024 09:16:41 GMT
- Title: Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation
- Authors: Yongguang Li, Yueqi Cao, Jindong Li, Qi Wang, Shengsheng Wang
- Abstract summary: Source-free Unsupervised Domain Adaptation (SF-UDA) aims to transfer a model's performance from a labeled source domain to an unlabeled target domain without direct access to source samples.
We introduce a data-efficient, CLIP-powered dual-branch network (CDBN) to address the dual challenges of limited source data and privacy concerns.
CDBN achieves near state-of-the-art performance with far fewer source domain samples than existing methods across 31 transfer tasks on seven datasets.
- Score: 4.7589762171821715
- License:
- Abstract: Source-free Unsupervised Domain Adaptation (SF-UDA) aims to transfer a model's performance from a labeled source domain to an unlabeled target domain without direct access to source samples, addressing critical data privacy concerns. However, most existing SF-UDA approaches assume the availability of abundant source domain samples, which is often impractical due to the high cost of data annotation. To address the dual challenges of limited source data and privacy concerns, we introduce a data-efficient, CLIP-powered dual-branch network (CDBN). This architecture consists of a cross-domain feature transfer branch and a target-specific feature learning branch, leveraging high-confidence target domain samples to transfer text features of source domain categories while learning target-specific soft prompts. By fusing the outputs of both branches, our approach not only effectively transfers source domain category semantic information to the target domain but also reduces the negative impacts of noise and domain gaps during target training. Furthermore, we propose an unsupervised optimization strategy driven by accurate classification and diversity, preserving the classification capability learned from the source domain while generating more confident and diverse predictions in the target domain. CDBN achieves near state-of-the-art performance with far fewer source domain samples than existing methods across 31 transfer tasks on seven datasets.
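The abstract does not spell out the exact form of the two branches or of the confidence-and-diversity objective, so the following PyTorch sketch is only a hedged illustration of the described idea: cosine-similarity logits from frozen source-category text features (the cross-domain transfer branch) are fused with logits from a learnable target-specific branch, and an entropy-based confidence term is paired with a diversity term over the batch-mean prediction. The class `PromptBranch`, the functions `dual_branch_logits` and `confidence_diversity_loss`, and the specific loss forms are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a CLIP-powered dual-branch prediction and a
# confidence + diversity objective; names and loss forms are assumptions.
import torch
import torch.nn.functional as F


class PromptBranch(torch.nn.Module):
    """Target-specific branch: learnable per-class embeddings standing in for
    text features produced by target-specific soft prompts."""

    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.class_embeddings = torch.nn.Parameter(0.02 * torch.randn(num_classes, dim))

    def forward(self) -> torch.Tensor:
        return F.normalize(self.class_embeddings, dim=-1)


def dual_branch_logits(image_feats, source_text_feats, prompt_branch, alpha=0.5, temperature=0.01):
    """Fuse the frozen cross-domain branch (source-category text features) with
    the learnable target-specific branch by mixing their cosine-similarity logits."""
    image_feats = F.normalize(image_feats, dim=-1)
    transfer_logits = image_feats @ F.normalize(source_text_feats, dim=-1).t() / temperature
    target_logits = image_feats @ prompt_branch().t() / temperature
    return alpha * transfer_logits + (1.0 - alpha) * target_logits


def confidence_diversity_loss(logits):
    """Unsupervised objective: confident per-sample predictions (low conditional
    entropy) plus diverse class usage (high entropy of the batch-mean prediction)."""
    probs = logits.softmax(dim=-1)
    confidence = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    marginal = probs.mean(dim=0)
    diversity = (marginal * marginal.clamp_min(1e-8).log()).sum()  # negative marginal entropy
    return confidence + diversity
```

Minimizing `confidence_diversity_loss(dual_branch_logits(...))` over unlabeled target batches, alongside supervision from high-confidence target samples, is one common way such an objective could be instantiated; the paper's actual optimization strategy may differ.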
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z) - Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and poorly adapted to the target domain.
We propose a contrastive adversarial training (CAT) approach that leverages labeled source domain samples to reinforce and regulate feature generation for the target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z) - Joint Distribution Alignment via Adversarial Learning for Domain Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches have performed this task through adversarial learning, yet they still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
arXiv Detail & Related papers (2021-09-19T00:27:08Z) - Preserving Semantic Consistency in Unsupervised Domain Adaptation Using Generative Adversarial Networks [33.84004077585957]
We propose a novel end-to-end semantically consistent generative adversarial network (SCGAN).
This network can achieve source to target domain matching by capturing semantic information at the feature level.
We demonstrate the robustness of the proposed method, which exceeds state-of-the-art performance in unsupervised domain adaptation settings.
arXiv Detail & Related papers (2021-04-28T12:23:30Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process (a generic sketch of such a loss appears after this list).
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Disentanglement-based Cross-Domain Feature Augmentation for Effective Unsupervised Domain Adaptive Person Re-identification [87.72851934197936]
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z) - Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Adversarial Training Based Multi-Source Unsupervised Domain Adaptation for Sentiment Analysis [19.05317868659781]
We propose two transfer learning frameworks based on the multi-source domain adaptation methodology for sentiment analysis.
The first framework is a novel Weighting Scheme based Unsupervised Domain Adaptation framework (WS-UDA), which combines the source classifiers to acquire pseudo labels for target instances.
The second framework is a Two-Stage Training based Unsupervised Domain Adaptation framework (2ST-UDA), which further exploits these pseudo labels to train a target private extractor.
arXiv Detail & Related papers (2020-06-10T01:41:00Z) - Deep Residual Correction Network for Partial Domain Adaptation [79.27753273651747]
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to the competitive deep domain adaptation approaches.
arXiv Detail & Related papers (2020-04-10T06:07:16Z)
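The ILA-DA entry above mentions mining similar and dissimilar source/target samples and applying a multi-sample contrastive loss; since the summary gives neither the mining rule nor the exact loss, the sketch below uses a generic supervised-contrastive-style formulation in which an assumed `positive_mask` stands in for the paper's (unspecified) affinity criterion.

```python
# Generic multi-sample contrastive alignment loss, sketched from the ILA-DA
# summary above; the pair mining and exact formulation are assumptions.
import torch
import torch.nn.functional as F


def multi_sample_contrastive_loss(anchor_feats, candidate_feats, positive_mask, temperature=0.1):
    """Pull each anchor towards all candidates marked similar and push it away
    from the rest, averaging over multiple positives per anchor.

    anchor_feats:    (N, D) features from one domain (e.g. target)
    candidate_feats: (M, D) features from the other domain (e.g. source)
    positive_mask:   (N, M) mask, nonzero where the affinity criterion deems a pair similar
    """
    anchor = F.normalize(anchor_feats, dim=-1)
    cand = F.normalize(candidate_feats, dim=-1)
    logits = anchor @ cand.t() / temperature                      # (N, M) scaled cosine similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    positive_mask = positive_mask.float()
    pos_count = positive_mask.sum(dim=1).clamp_min(1.0)
    # Mean log-probability over each anchor's positives, averaged over anchors.
    return (-(log_prob * positive_mask).sum(dim=1) / pos_count).mean()
```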