Prompt-based Distribution Alignment for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2312.09553v2
- Date: Fri, 26 Jan 2024 16:31:41 GMT
- Title: Prompt-based Distribution Alignment for Unsupervised Domain Adaptation
- Authors: Shuanghao Bai, Min Zhang, Wanqi Zhou, Siteng Huang, Zhirong Luan,
Donglin Wang and Badong Chen
- Abstract summary: We experimentally demonstrate that unsupervised-trained visual-language models (VLMs) can significantly reduce the distribution discrepancy between source and target domains.
A major challenge for directly deploying such models on downstream UDA tasks is prompt engineering.
We propose a Prompt-based Distribution Alignment (PDA) method to incorporate the domain knowledge into prompt learning.
- Score: 42.77798810726824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, despite the unprecedented success of large pre-trained
visual-language models (VLMs) on a wide range of downstream tasks, the
real-world unsupervised domain adaptation (UDA) problem remains
under-explored. Therefore, in this paper, we first experimentally demonstrate
that unsupervised-trained VLMs can significantly reduce the distribution
discrepancy between source and target domains, thereby improving UDA
performance. However, a major challenge in directly deploying such models on
downstream UDA tasks is prompt engineering, which requires aligning the
domain knowledge of the source and target domains, since UDA performance
depends heavily on a good domain-invariant representation. We further propose
a Prompt-based Distribution Alignment (PDA) method to incorporate domain
knowledge into prompt learning. Specifically, PDA employs a two-branch
prompt-tuning paradigm consisting of a base branch and an alignment branch.
The base branch focuses on integrating class-related representations into
prompts, ensuring discrimination among different classes. To further minimize
domain discrepancy, the alignment branch constructs feature banks for both
the source and target domains and applies image-guided feature tuning (IFT),
which makes the input attend to the feature banks and thereby integrates
self-enhanced and cross-domain features into the model. In this way, the two
branches reinforce each other to enhance the adaptation of VLMs for UDA. We conduct
extensive experiments on three benchmarks to demonstrate that our proposed PDA
achieves state-of-the-art performance. The code is available at
https://github.com/BaiShuanghao/Prompt-based-Distribution-Alignment.
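The IFT mechanism described in the abstract lends itself to a short illustration. The PyTorch sketch below is a minimal, assumed rendering of the idea, not the authors' implementation: the class name, bank size, projections, and residual fusion rule are all illustrative. It shows an image feature querying source- and target-domain feature banks via attention, mixing self-enhanced and cross-domain information into the representation.

```python
# Hypothetical sketch of image-guided feature tuning (IFT): an image feature
# attends to source- and target-domain feature banks, so self-enhanced and
# cross-domain information is fused into the representation. All names,
# sizes, and the fusion rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageGuidedFeatureTuning(nn.Module):
    def __init__(self, dim: int, bank_size: int = 256):
        super().__init__()
        # Feature banks caching image features from each domain (random and
        # frozen here; in practice filled with real source/target features).
        self.source_bank = nn.Parameter(torch.randn(bank_size, dim), requires_grad=False)
        self.target_bank = nn.Parameter(torch.randn(bank_size, dim), requires_grad=False)
        self.query_proj = nn.Linear(dim, dim)
        self.key_proj = nn.Linear(dim, dim)

    def attend(self, feat: torch.Tensor, bank: torch.Tensor) -> torch.Tensor:
        # Scaled dot-product attention: input features are queries,
        # bank entries serve as keys and values.
        q = self.query_proj(feat)                                   # (B, D)
        k = self.key_proj(bank)                                     # (M, D)
        attn = F.softmax(q @ k.t() / feat.size(-1) ** 0.5, dim=-1)  # (B, M)
        return attn @ bank                                          # (B, D)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Residual fusion of same-domain (self-enhanced) and cross-domain
        # bank readouts with the original feature.
        same = self.attend(feat, self.source_bank)
        cross = self.attend(feat, self.target_bank)
        return feat + 0.5 * (same + cross)

ift = ImageGuidedFeatureTuning(dim=512)
image_features = torch.randn(8, 512)   # e.g. CLIP image embeddings
print(ift(image_features).shape)       # torch.Size([8, 512])
```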
Related papers
- Enhancing Domain Adaptation through Prompt Gradient Alignment [16.618313165111793]
We develop a line of work based on prompt learning to learn both domain-invariant and domain-specific features.
We cast UDA as a multiple-objective optimization problem in which each objective is represented by a domain loss.
Our method consistently surpasses other prompt-based baselines by a large margin on different UDA benchmarks (a sketch of the gradient-alignment idea follows this entry).
arXiv Detail & Related papers (2024-06-13T17:40:15Z)
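Reading UDA as a multi-objective problem suggests aligning the per-domain loss gradients on the shared prompt parameters. A minimal sketch, assuming a PCGrad-style projection (this exact rule is an illustrative choice, not necessarily the paper's):

```python
# Hedged sketch: UDA as two objectives (source loss, target loss) whose
# gradients on a shared prompt are aligned before the update. The
# PCGrad-style projection is an assumed choice for illustration.
import torch

def aligned_gradient(g_src: torch.Tensor, g_tgt: torch.Tensor) -> torch.Tensor:
    dot = torch.dot(g_src, g_tgt)
    if dot < 0:
        # Conflicting gradients: project the target gradient onto the
        # normal plane of the source gradient before summing.
        g_tgt = g_tgt - (dot / g_src.norm().pow(2)) * g_src
    return g_src + g_tgt

prompt = torch.randn(16, requires_grad=True)   # toy shared prompt vector
loss_src = (prompt ** 2).sum()                 # stand-ins for domain losses
loss_tgt = -prompt.sum()
g_src, = torch.autograd.grad(loss_src, prompt, retain_graph=True)
g_tgt, = torch.autograd.grad(loss_tgt, prompt)
with torch.no_grad():
    prompt -= 0.1 * aligned_gradient(g_src, g_tgt)   # one aligned update step
```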
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying target-domain classes that are unseen in the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Increasing Model Generalizability for Unsupervised Domain Adaptation [12.013345715187285]
We show that increasing the interclass margins in the embedding space can help develop a UDA algorithm with improved performance (a sketch of a generic margin loss follows this entry).
We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets.
arXiv Detail & Related papers (2022-09-29T09:08:04Z)
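A common way to enlarge interclass margins in an embedding space is an additive-margin softmax over normalized embeddings; the sketch below assumes that generic family of losses and is not taken from the paper itself:

```python
# Illustrative additive-margin softmax: subtracting a margin m from the
# true-class cosine logit pushes class embeddings farther apart. This is a
# generic margin loss assumed for illustration, not the paper's objective.
import torch
import torch.nn.functional as F

def margin_cross_entropy(emb, prototypes, labels, m: float = 0.35, s: float = 30.0):
    # Cosine logits between L2-normalized embeddings and class prototypes.
    logits = F.normalize(emb, dim=1) @ F.normalize(prototypes, dim=1).t()
    logits = logits - m * F.one_hot(labels, prototypes.size(0))  # margin on true class
    return F.cross_entropy(s * logits, labels)                   # s sharpens the softmax

emb = torch.randn(4, 128)           # batch of image embeddings
prototypes = torch.randn(10, 128)   # one prototype per class
labels = torch.tensor([0, 3, 7, 1])
print(margin_cross_entropy(emb, prototypes, labels))
```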
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experimental results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- A New Bidirectional Unsupervised Domain Adaptation Segmentation Framework [27.13101555533594]
Unsupervised domain adaptation (UDA) techniques have been proposed to bridge the gap between different domains.
In this paper, we propose a bidirectional UDA framework based on disentangled representation learning that achieves equally competent UDA performance in both directions.
arXiv Detail & Related papers (2021-08-18T05:25:11Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness (a sketch of the mixup idea follows this entry).
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
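Cross-domain mixup, as summarized above, can be pictured as interpolating minority-class source images with unlabeled target images; a minimal sketch under that assumption (the Beta prior and source-dominant clamp are illustrative details, not the paper's recipe):

```python
# Hedged sketch of cross-domain mixup: augmenting a minority source class
# by interpolating its images with unlabeled target images.
import torch

def cross_domain_mixup(x_src_minority: torch.Tensor, x_tgt: torch.Tensor,
                       alpha: float = 0.2) -> torch.Tensor:
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)  # keep the mixed samples source-dominant
    idx = torch.randperm(x_tgt.size(0))[: x_src_minority.size(0)]
    return lam * x_src_minority + (1.0 - lam) * x_tgt[idx]

x_src_minority = torch.randn(8, 3, 224, 224)   # under-represented source class
x_tgt = torch.randn(32, 3, 224, 224)           # unlabeled target batch
print(cross_domain_mixup(x_src_minority, x_tgt).shape)  # torch.Size([8, 3, 224, 224])
```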
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain-conditioned channel attention mechanism (a sketch of the idea follows this entry).
This is the first work to explore domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
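Domain-conditioned channel attention can be pictured as a squeeze-and-excitation gate with a separate excitation branch per domain; the sketch below is one plausible reading of the summary, not DCAN's actual architecture:

```python
# Illustrative domain-conditioned channel attention: a squeeze-and-excitation
# style gate with a separate excitation MLP per domain, so source and target
# inputs can excite different convolutional channels. Structure is assumed.
import torch
import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        def excitation() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
        # One gating branch per domain.
        self.excite = nn.ModuleDict({"source": excitation(), "target": excitation()})

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        squeezed = x.mean(dim=(2, 3))           # global average pooling: (B, C)
        gate = self.excite[domain](squeezed)    # per-domain channel weights
        return x * gate[:, :, None, None]       # reweight feature-map channels

attn = DomainConditionedChannelAttention(channels=64)
feature_map = torch.randn(2, 64, 56, 56)
print(attn(feature_map, "target").shape)   # torch.Size([2, 64, 56, 56])
```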