Federated Domain Generalization with Domain-specific Soft Prompts Generation
- URL: http://arxiv.org/abs/2509.20807v1
- Date: Thu, 25 Sep 2025 06:41:48 GMT
- Title: Federated Domain Generalization with Domain-specific Soft Prompts Generation
- Authors: Jianhan Wu, Xiaoyang Qu, Zhangcheng Huang, Jianzong Wang
- Abstract summary: We propose a novel and effective method from a generative perspective for handling federated domain generalization tasks. Specifically, during training, we introduce domain-specific soft prompts (DSPs) for each domain and integrate content and domain knowledge into the generative model. In the inference phase, the generator is utilized to obtain DSPs for unseen target domains, thus guiding downstream tasks in unknown domains.
- Score: 34.51919138862278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompt learning has become an efficient paradigm for adapting CLIP to downstream tasks. Compared with traditional fine-tuning, prompt learning optimizes a few parameters yet yields highly competitive results, especially appealing in federated learning for its computational efficiency. However, client data in federated settings is typically heterogeneous, engendering domain shift among clients and posing a formidable challenge for downstream-task adaptation. Existing federated domain generalization (FDG) methods based on prompt learning typically learn soft prompts from training samples, replacing manually designed prompts to enhance the generalization ability of federated models. However, these learned prompts exhibit limited diversity and tend to ignore information from unknown domains. We propose a novel and effective method from a generative perspective for handling FDG tasks, namely federated domain generalization with domain-specific soft prompts generation (FedDSPG). Specifically, during training, we introduce domain-specific soft prompts (DSPs) for each domain and integrate content and domain knowledge into the generative model among clients. In the inference phase, the generator is utilized to obtain DSPs for unseen target domains, thus guiding downstream tasks in unknown domains. Comprehensive evaluations across several public datasets confirm that our method outperforms existing strong baselines in FDG, achieving state-of-the-art results.
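The sketch below illustrates the generative idea the abstract describes: each client learns a domain-specific soft prompt (DSP), a shared generator is trained to map a domain representation to that DSP, and at inference the generator synthesizes a DSP for an unseen domain that is prepended to CLIP-style text token embeddings. This is a minimal, hypothetical illustration; the module names, dimensions, losses, domain-representation choice, and federated protocol are all assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the FedDSPG idea (not the paper's implementation).
import torch
import torch.nn as nn

class DSPGenerator(nn.Module):
    """Maps a domain representation to a sequence of soft prompt vectors."""
    def __init__(self, domain_dim=128, prompt_len=16, embed_dim=512):
        super().__init__()
        self.prompt_len = prompt_len
        self.embed_dim = embed_dim
        self.net = nn.Sequential(
            nn.Linear(domain_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, prompt_len * embed_dim),
        )

    def forward(self, domain_repr):                  # (B, domain_dim)
        out = self.net(domain_repr)                  # (B, prompt_len * embed_dim)
        return out.view(-1, self.prompt_len, self.embed_dim)

gen = DSPGenerator()

# Training intuition (per client k): fit the client's own DSP on local data,
# then train the shared generator to reproduce it from a representation of
# client k's domain (here a placeholder; e.g. a mean image feature).
dsp_k = nn.Parameter(torch.randn(1, 16, 512))        # client k's learned DSP
domain_repr_k = torch.randn(1, 128)                  # placeholder domain feature
recon_loss = nn.functional.mse_loss(gen(domain_repr_k), dsp_k)

# Inference on an unseen target domain: derive a domain representation from a
# few unlabeled target images, generate a DSP, and prepend it to the class-name
# token embeddings before CLIP's text encoder (standard soft-prompting layout).
target_repr = torch.randn(1, 128)                    # placeholder target feature
dsp_target = gen(target_repr)                        # (1, 16, 512) soft prompt
class_token_embeds = torch.randn(10, 4, 512)         # 10 classes, 4 tokens each
prompts = torch.cat([dsp_target.expand(10, -1, -1), class_token_embeds], dim=1)
# `prompts` would then be fed through the text encoder to score target images.
```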
Related papers
- Domain Generalizable Continual Learning [24.99964797750464]
We introduce a novel and realistic setting named domain generalizable continual learning (DGCL). A model learns sequential tasks, each involving a single domain, aiming to perform well across all encountered tasks and domains. This setting poses unique challenges in acquiring, retaining, and leveraging both semantic- and domain-relevant information for robust generalization. We propose adaptive Domain Transformation (DoT), an innovative PTMs-based approach tailored to DGCL.
arXiv Detail & Related papers (2025-10-19T16:16:20Z) - Text-Driven Causal Representation Learning for Source-Free Domain Generalization [82.75041792888274]
We propose TDCRL, the first method to integrate causal inference into the source-free domain generalization setting. Our approach offers a clear and effective mechanism to achieve robust, domain-invariant features, ensuring strong generalization.
arXiv Detail & Related papers (2025-07-14T06:20:42Z) - Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler [45.71475375161575]
In Open-Set Domain Generalization, the model is exposed to both new variations of data appearance (domains) and open-set conditions.
We propose the Evidential Bi-Level Hardest Domain Scheduler (EBiL-HaDS) to achieve an adaptive domain scheduler.
arXiv Detail & Related papers (2024-09-26T05:57:35Z) - Soft Prompt Generation for Domain Generalization [13.957351735394683]
Large pre-trained vision language models (VLMs) have shown impressive zero-shot ability on downstream tasks with manually designed prompts.
To further adapt VLMs to downstream tasks, soft prompts have been proposed to replace manually designed prompts.
arXiv Detail & Related papers (2024-04-30T06:33:07Z) - DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning [20.51179258856028]
Federated learning (FL) has emerged as a powerful paradigm for learning from decentralized data.
Most existing FL methods assume that domain labels are provided during training, and their evaluation imposes explicit constraints on the number of domains.
We propose Disentangled Prompt Tuning (DiPrompT), a method that tackles the above restrictions by learning adaptive prompts for domain generalization in a distributed manner.
arXiv Detail & Related papers (2024-03-11T15:58:15Z) - Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models [55.51408151807268]
When tailored to specific domains, Large Language Models (LLMs) tend to experience catastrophic forgetting.
Crafting a versatile model for multiple domains simultaneously often results in a decline in overall performance.
We present the RolE Prompting Guided Multi-Domain Adaptation (REGA) strategy.
arXiv Detail & Related papers (2024-03-05T08:22:41Z) - Hypernetwork-Driven Model Fusion for Federated Domain Generalization [26.492360039272942]
Federated Learning (FL) faces significant challenges with domain shifts in heterogeneous data.
We propose a robust framework, coined as hypernetwork-based Federated Fusion (hFedF), using hypernetworks for non-linear aggregation.
Our method employs client-specific embeddings and gradient alignment techniques to manage domain generalization effectively.
arXiv Detail & Related papers (2024-02-10T15:42:03Z) - HCVP: Leveraging Hierarchical Contrastive Visual Prompt for Domain Generalization [69.33162366130887]
Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features.
We introduce a novel method designed to supplement the model with domain-level and task-specific characteristics.
This approach aims to guide the model in more effectively separating invariant features from specific characteristics, thereby boosting the generalization.
arXiv Detail & Related papers (2024-01-18T04:23:21Z) - Domain-Controlled Prompt Learning [49.45309818782329]
Existing prompt learning methods often lack domain-awareness or domain-transfer mechanisms.
We propose Domain-Controlled Prompt Learning for specific domains.
Our method achieves state-of-the-art performance on specific-domain image recognition datasets.
arXiv Detail & Related papers (2023-09-30T02:59:49Z) - Learning Domain Invariant Prompt for Vision-Language Models [31.581652862478965]
We propose a novel prompt learning paradigm that directly generates domain-invariant prompts that can be generalized to unseen domains, called MetaPrompt.
Our method consistently and significantly outperforms existing methods.
arXiv Detail & Related papers (2022-12-08T11:23:24Z) - TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z) - Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains into a common latent space.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)