C-DGPA: Class-Centric Dual-Alignment Generative Prompt Adaptation
- URL: http://arxiv.org/abs/2512.16164v1
- Date: Thu, 18 Dec 2025 04:30:53 GMT
- Title: C-DGPA: Class-Centric Dual-Alignment Generative Prompt Adaptation
- Authors: Chao Li, Dasha Hu, Chengyang Li, Yuming Jiang, Yuncheng Shen,
- Abstract summary: Unsupervised Domain Adaptation transfers knowledge from a labeled source domain to an unlabeled target domain. Existing prompt-tuning strategies primarily align marginal distribution discrepancies. C-DGPA integrates domain knowledge into prompt learning via synergistic optimization. It achieves new state-of-the-art results on all benchmarks.
- Score: 8.824565305964406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation transfers knowledge from a labeled source domain to an unlabeled target domain. Directly deploying Vision-Language Models (VLMs) with prompt tuning in downstream UDA tasks faces the significant challenge of mitigating domain discrepancies. Existing prompt-tuning strategies primarily align marginal distributions but neglect conditional distribution discrepancies, leading to critical issues such as class prototype misalignment and degraded semantic discriminability. To address these limitations, this work proposes C-DGPA: Class-Centric Dual-Alignment Generative Prompt Adaptation. C-DGPA synergistically optimizes marginal distribution alignment and conditional distribution alignment through a novel dual-branch architecture. The marginal distribution alignment branch employs a dynamic adversarial training framework to bridge marginal distribution discrepancies. Simultaneously, the conditional distribution alignment branch introduces a Class Mapping Mechanism (CMM) to align conditional distribution discrepancies by standardizing semantic prompt understanding and preventing source-domain over-reliance. This dual-alignment strategy effectively integrates domain knowledge into prompt learning via synergistic optimization, ensuring domain-invariant and semantically discriminative representations. Extensive experiments on OfficeHome, Office31, and VisDA-2017 validate the superiority of C-DGPA. It achieves new state-of-the-art results on all benchmarks.
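The abstract describes a joint objective with two alignment branches. A minimal numerical sketch of what such a dual-branch objective could look like is given below; the function names, the prototype-distance proxy for conditional alignment, and the weights `lam1`/`lam2` are assumptions for illustration, not the paper's actual CMM or adversarial framework.

```python
import numpy as np

def domain_adversarial_loss(domain_logits, domain_labels):
    """Binary cross-entropy of a domain discriminator (marginal-alignment
    branch). In adversarial training the feature extractor is optimized
    to *confuse* this discriminator, e.g. via gradient reversal."""
    p = 1.0 / (1.0 + np.exp(-domain_logits))
    eps = 1e-8
    return -np.mean(domain_labels * np.log(p + eps)
                    + (1 - domain_labels) * np.log(1 - p + eps))

def class_prototype_alignment_loss(src_feats, src_labels,
                                   tgt_feats, tgt_pseudo, num_classes):
    """Squared distance between per-class feature prototypes of the two
    domains -- a simple proxy for conditional-distribution alignment
    (target labels come from pseudo-labeling)."""
    total, counted = 0.0, 0
    for c in range(num_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo == c]
        if len(s) and len(t):
            total += np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
            counted += 1
    return total / max(counted, 1)

def dual_alignment_objective(cls_loss, marg_loss, cond_loss,
                             lam1=1.0, lam2=1.0):
    # Task loss plus the two alignment branches, combined synergistically.
    return cls_loss + lam1 * marg_loss + lam2 * cond_loss
```

In this toy form the two branches are simply summed; the paper's "dynamic" adversarial framework presumably schedules the trade-off during training.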
Related papers
- Guidance Not Obstruction: A Conjugate Consistent Enhanced Strategy for Domain Generalization [50.04665252665413]
We argue that acquiring discriminative generalization between classes within domains is crucial. In contrast to seeking distribution alignment, we endeavor to safeguard domain-related between-class discrimination. We employ a novel distribution-level Universum strategy to generate supplementary diverse domain-related class-conditional distributions.
arXiv Detail & Related papers (2024-12-13T12:25:16Z)
- Prompt-based Distribution Alignment for Unsupervised Domain Adaptation [42.77798810726824]
We experimentally demonstrate that the unsupervised-trained visual-language models (VLMs) can significantly reduce the distribution discrepancy between source and target domains.
A major challenge for directly deploying such models on downstream UDA tasks is prompt engineering.
We propose a Prompt-based Distribution Alignment (PDA) method to incorporate the domain knowledge into prompt learning.
arXiv Detail & Related papers (2023-12-15T06:15:04Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- CASUAL: Conditional Support Alignment for Domain Adaptation with Label Shift [9.2929174544214]
Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained on labeled samples from the source domain and unlabeled ones from the target domain. We propose a novel Conditional Adversarial SUpport ALignment (CASUAL) method whose aim is to minimize the conditional symmetric support divergence between the source and target domains' feature representation distributions.
arXiv Detail & Related papers (2023-05-29T05:20:18Z)
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
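The class-aware contrastive component described here can be sketched as a supervised contrastive loss over features pooled from both domains; this is an illustrative stand-in, not CDA's exact two-stage formulation, and the `temperature` parameter is an assumption.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.5):
    """Pulls same-class samples (from either domain) together and pushes
    different classes apart. `features` are assumed L2-normalized."""
    n = len(features)
    sim = features @ features.T / temperature
    not_self = ~np.eye(n, dtype=bool)  # exclude self-similarity
    loss, count = 0.0, 0
    for i in range(n):
        positives = (labels == labels[i]) & not_self[i]
        if not positives.any():
            continue
        # log of the softmax denominator over all non-self samples
        denom = np.log(np.sum(np.exp(sim[i][not_self[i]])))
        loss += np.mean(denom - sim[i][positives])
        count += 1
    return loss / max(count, 1)
```

Lower values of this loss correspond to higher intra-class compactness across domains, which is the effect the CDA summary attributes to its contrastive stage.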
arXiv Detail & Related papers (2023-01-10T07:43:21Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- HSVA: Hierarchical Semantic-Visual Adaptation for Zero-Shot Learning [74.76431541169342]
Zero-shot learning (ZSL) tackles the unseen class recognition problem, transferring semantic knowledge from seen classes to unseen ones.
We propose a novel hierarchical semantic-visual adaptation (HSVA) framework to align semantic and visual domains.
Experiments on four benchmark datasets demonstrate HSVA achieves superior performance on both conventional and generalized ZSL.
arXiv Detail & Related papers (2021-09-30T14:27:50Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
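The combination of progressively refined prototypes and a margin-based contrastive criterion might be sketched as follows; the EMA prototype update, the cosine-similarity hinge, and the `margin`/`momentum` values are illustrative assumptions, not MPSCL's published loss.

```python
import numpy as np

def update_prototypes(prototypes, features, labels, momentum=0.9):
    """Progressive (EMA) refinement of per-class semantic prototypes."""
    protos = prototypes.copy()
    for c in range(len(protos)):
        batch = features[labels == c]
        if len(batch):
            protos[c] = momentum * protos[c] + (1 - momentum) * batch.mean(axis=0)
    return protos

def margin_prototype_loss(features, labels, prototypes, margin=0.2):
    """Hinge loss: each embedding must be closer (in cosine similarity) to
    its own class prototype than to any other prototype by `margin`."""
    def norm(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)
    sims = norm(features) @ norm(prototypes).T
    loss = 0.0
    for i, c in enumerate(labels):
        others = np.delete(sims[i], c)
        loss += max(0.0, margin + others.max() - sims[i][c])
    return loss / len(features)
```

Enforcing the margin keeps between-class separation in the embedding space, which is one plausible reading of "margin preserving" discriminability.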
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment [27.671964294233756]
In this study, we focus on the unsupervised domain adaptation problem where an approximate inference model is to be learned from a labeled data domain.
The success of unsupervised domain adaptation largely relies on the cross-domain feature alignment.
We introduce a Gaussian-guided latent alignment approach to align the latent feature distributions of the two domains under the guidance of the prior distribution.
In such an indirect way, the distributions over the samples from the two domains will be constructed on a common feature space, i.e., the space of the prior.
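Aligning both domains to a Gaussian prior admits a simple closed-form sketch: fit a diagonal Gaussian to each domain's latent features and penalize its KL divergence from the standard normal prior. This is a minimal illustration of the idea, assuming a diagonal-covariance fit; the paper's actual alignment mechanism may differ.

```python
import numpy as np

def kl_to_standard_normal(features):
    """Closed-form KL( N(mu, diag(var)) || N(0, I) ) for a feature batch.
    Minimizing this for source and target batches separately pulls both
    latent distributions toward the shared Gaussian prior."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-8  # avoid log(0) for degenerate dims
    return 0.5 * np.sum(var + mu ** 2 - 1.0 - np.log(var))
```

Because both domains are matched to the same prior rather than to each other, the alignment is indirect, as the summary describes.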
arXiv Detail & Related papers (2020-06-23T05:33:54Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation adapts a model to an unlabeled target domain by relying on well-established source-domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.