Token-Level Prompt Mixture with Parameter-Free Routing for Federated Domain Generalization
- URL: http://arxiv.org/abs/2504.21063v1
- Date: Tue, 29 Apr 2025 11:06:03 GMT
- Title: Token-Level Prompt Mixture with Parameter-Free Routing for Federated Domain Generalization
- Authors: Shuai Gong, Chaoran Cui, Xiaolin Dong, Xiushan Nie, Lei Zhu, Xiaojun Chang
- Abstract summary: Federated domain generalization (FedDG) aims to learn a globally generalizable model from decentralized clients with heterogeneous data. Recent studies have introduced prompt learning to adapt vision-language models (VLMs) in FedDG by learning a single global prompt. We propose TRIP, a Token-level prompt mixture with parameter-free routing framework for FedDG.
- Score: 51.562474873972086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated domain generalization (FedDG) aims to learn a globally generalizable model from decentralized clients with heterogeneous data while preserving privacy. Recent studies have introduced prompt learning to adapt vision-language models (VLMs) in FedDG by learning a single global prompt. However, such a one-prompt-fits-all learning paradigm typically leads to performance degradation on personalized samples. Although the mixture of experts (MoE) offers a promising solution for specialization, existing MoE-based methods suffer from coarse image-level expert assignment and high communication costs from parameterized routers. To address these limitations, we propose TRIP, a Token-level prompt mixture with parameter-free routing framework for FedDG, which treats multiple prompts as distinct experts. Unlike existing image-level routing designs, TRIP assigns different tokens within an image to specific experts. To ensure communication efficiency, TRIP incorporates a parameter-free routing mechanism based on token clustering and optimal transport. The instance-specific prompt is then synthesized by aggregating experts, weighted by the number of tokens assigned to each. Additionally, TRIP develops an unbiased learning strategy for prompt experts, leveraging the VLM's zero-shot generalization capability. Extensive experiments across four benchmarks demonstrate that TRIP achieves optimal generalization results, with communication of only 1K parameters per round. Our code is available at https://github.com/GongShuai8210/TRIP.
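To make the routing pipeline above concrete, here is a minimal sketch of parameter-free token-to-expert assignment via entropy-regularized optimal transport, followed by count-weighted prompt aggregation. It is an illustration under stated assumptions, not the authors' implementation: the expert centroids, cosine cost, tensor shapes, and hard argmax assignment are all choices made here, and the paper's actual clustering step may differ.

```python
import torch
import torch.nn.functional as F

def sinkhorn(cost, eps=0.1, n_iters=50):
    """Entropy-regularized OT plan between uniform token/expert marginals."""
    n_tokens, n_experts = cost.shape
    K = torch.exp(-cost / eps)                         # Gibbs kernel
    r = torch.full((n_tokens,), 1.0 / n_tokens)        # uniform marginal over tokens
    c = torch.full((n_experts,), 1.0 / n_experts)      # balanced marginal over experts
    a, b = torch.ones(n_tokens), torch.ones(n_experts)
    for _ in range(n_iters):                           # Sinkhorn-Knopp iterations
        a = r / (K @ b)
        b = c / (K.T @ a)
    return a[:, None] * K * b[None, :]                 # transport plan, shape (T, E)

def route_and_mix(tokens, prompt_experts):
    """Assign image tokens to prompt experts, then mix experts by token share."""
    # One centroid per expert: the mean of its prompt tokens (an assumption).
    keys = prompt_experts.mean(dim=1)
    cost = 1.0 - F.normalize(tokens, dim=-1) @ F.normalize(keys, dim=-1).T
    plan = sinkhorn(cost)
    assign = plan.argmax(dim=1)                        # hard token -> expert assignment
    counts = torch.bincount(assign, minlength=prompt_experts.shape[0]).float()
    weights = counts / counts.sum()                    # expert weight = its token share
    return torch.einsum("e,eld->ld", weights, prompt_experts)

tokens = torch.randn(196, 512)            # e.g. ViT patch tokens (illustrative dims)
prompt_experts = torch.randn(4, 16, 512)  # 4 prompt experts, 16 prompt tokens each
instance_prompt = route_and_mix(tokens, prompt_experts)  # (16, 512)
```

Since the routing involves no learnable parameters, only the prompt experts themselves would need to be communicated between clients and the server each round.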
Related papers
- Symbolic Mixture-of-Experts: Adaptive Skill-based Routing for Heterogeneous Reasoning [76.10639521319382] (2025-03-07)
We propose Symbolic-MoE, a symbolic, text-based, and gradient-free Mixture-of-Experts framework.
We show that Symbolic-MoE's instance-level expert selection improves performance by a large margin but, when implemented naively, can introduce high computational overhead.
- Glider: Global and Local Instruction-Driven Expert Router [83.785832410832] (2024-10-09)
"Model MoErging" methods prioritize generalization to unseen tasks at the expense of performance on held-in tasks.
We propose Global and Local Instruction Driven Expert Router (GLIDER) that integrates a multi-scale routing mechanism.
GLIDER achieves substantially improved held-in performance while maintaining strong generalization on held-out tasks.
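The "multi-scale routing mechanism" here suggests combining a task-level (global) gate with a token-level (local) gate. Below is a speculative sketch of that idea; the names, shapes, and the multiplicative way the two scales are combined are assumptions, not GLIDER's actual design.

```python
import torch
import torch.nn.functional as F

def multiscale_gate(instr_emb, tokens, expert_keys):
    """Combine a global (instruction-level) gate with local (token-level) gates."""
    # instr_emb: (D,) task embedding; tokens: (T, D); expert_keys: (E, D)
    keys = F.normalize(expert_keys, dim=-1)
    g_global = (F.normalize(instr_emb, dim=-1) @ keys.T).softmax(-1)  # (E,)
    g_local = (F.normalize(tokens, dim=-1) @ keys.T).softmax(-1)      # (T, E)
    return (g_global[None, :] * g_local).softmax(-1)                  # combined (T, E)

weights = multiscale_gate(torch.randn(64), torch.randn(10, 64), torch.randn(4, 64))
```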
- Towards Personalized Federated Multi-Scenario Multi-Task Recommendation [22.095138650857436] (2024-06-27)
PF-MSMTrec is a novel framework for personalized federated multi-scenario multi-task recommendation.
We introduce a bottom-up joint learning mechanism to address the unique challenges of multiple optimization conflicts.
Our proposed method outperforms state-of-the-art approaches.
- Multi-Task Dense Prediction via Mixture of Low-Rank Experts [35.11968315125389] (2024-03-26)
We present a novel decoder-focused method for multi-task dense prediction, called Mixture-of-Low-Rank-Experts (MLoRE).
To model the global task relationships, MLoRE adds a generic convolution path to the original MoE structure, where each task feature can go through this path for explicit parameter sharing.
Experiments show that MLoRE achieves superior performance compared to previous state-of-the-art methods on all metrics.
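To illustrate the "generic convolution path" idea, here is a hedged PyTorch sketch of a mixture-of-low-rank-experts block with an always-on shared path. The ranks, module names, and the per-task router are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MLoREBlock(nn.Module):
    """Low-rank conv experts plus an always-on generic path shared by all tasks."""
    def __init__(self, channels=64, n_experts=4, rank=8, n_tasks=3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(                   # low-rank expert: reduce -> 3x3 -> expand
                nn.Conv2d(channels, rank, 1),
                nn.Conv2d(rank, rank, 3, padding=1),
                nn.Conv2d(rank, channels, 1),
            ) for _ in range(n_experts)
        )
        self.generic = nn.Conv2d(channels, channels, 3, padding=1)  # shared generic path
        self.routers = nn.ModuleList(nn.Linear(channels, n_experts) for _ in range(n_tasks))

    def forward(self, x, task):
        gate = self.routers[task](x.mean(dim=(2, 3))).softmax(dim=-1)  # (B, E) per-task gate
        mix = sum(g[:, None, None, None] * e(x)
                  for g, e in zip(gate.unbind(dim=1), self.experts))
        # Every task's features also pass through the generic path for explicit sharing.
        return x + mix + self.generic(x)

out = MLoREBlock()(torch.randn(2, 64, 32, 32), task=0)  # (2, 64, 32, 32)
```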
- MoPE: Mixture of Prompt Experts for Parameter-Efficient and Scalable Multimodal Fusion [29.46189153751869] (2024-03-14)
Mixture of Prompt Experts (MoPE) is the first technique designed to overcome such limitations by decomposing standard prompts.
Our MoPE-based fusion method exhibits greater expressiveness, scaling more effectively with the training data and the overall number of trainable parameters.
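A minimal sketch of the prompt-decomposition idea follows, assuming an instance-level router conditioned on a paired-modality feature; the names and dimensions are illustrative, not MoPE's actual design.

```python
import torch
import torch.nn as nn

class PromptMixture(nn.Module):
    """Decompose one static prompt into experts mixed per instance by a small router."""
    def __init__(self, n_experts=4, prompt_len=8, dim=512, cond_dim=512):
        super().__init__()
        self.experts = nn.Parameter(torch.randn(n_experts, prompt_len, dim) * 0.02)
        self.router = nn.Linear(cond_dim, n_experts)   # conditioned on the paired modality

    def forward(self, cond_feat):                      # cond_feat: (B, cond_dim)
        w = self.router(cond_feat).softmax(dim=-1)     # (B, E) instance-level mixture
        return torch.einsum("be,eld->bld", w, self.experts)  # (B, prompt_len, dim)

prompt = PromptMixture()(torch.randn(2, 512))          # per-instance prompts, (2, 8, 512)
```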
- Learning to Route Among Specialized Experts for Zero-Shot Generalization [39.56470042680907] (2024-02-08)
We propose Post-Hoc Adaptive Tokenwise Gating Over an Ocean of Specialized Experts (PHATGOOSE)
It learns to route among specialized modules that were produced through parameter-efficient fine-tuning.
It does not require simultaneous access to the datasets used to create the specialized models and only requires a modest amount of additional compute after each expert model is trained.
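The tokenwise, post-hoc flavor of this routing can be sketched as follows. The per-expert gate vectors are assumed to be trained separately alongside each expert, as the summary implies; everything else here (names, dimensions, top-1 dispatch) is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def tokenwise_route(tokens, gates, expert_fns):
    """Dispatch each token to the expert whose gate vector matches it best."""
    # tokens: (T, D); gates: (E, D), one gate vector shipped with each expert
    sim = F.normalize(tokens, dim=-1) @ F.normalize(gates, dim=-1).T  # (T, E)
    choice = sim.argmax(dim=-1)                     # top-1 expert per token
    out = torch.empty_like(tokens)
    for e, fn in enumerate(expert_fns):
        mask = choice == e
        if mask.any():
            out[mask] = fn(tokens[mask])            # apply that expert's module
    return out

# Stand-in experts; in practice these would be PEFT modules (e.g., LoRA-adapted layers).
experts = [torch.nn.Linear(64, 64) for _ in range(4)]
routed = tokenwise_route(torch.randn(128, 64), torch.randn(4, 64), experts)
```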
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424] (2023-10-27)
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
- Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367] (2023-07-05)
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
- Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking [83.40120598637665] (2023-06-01)
Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle a variety of task-oriented dialogue domains without the cost of collecting in-domain data.
Existing works mainly study common data- or model-level augmentation methods to enhance generalization.
We present a simple and effective "divide, conquer and combine" solution, which explicitly disentangles the semantics of seen data.
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704] (2021-12-16)
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.