SDFed: Bridging Local Global Discrepancy via Subspace Refinement and Divergence Control in Federated Prompt Learning
- URL: http://arxiv.org/abs/2602.08590v1
- Date: Mon, 09 Feb 2026 12:33:00 GMT
- Title: SDFed: Bridging Local Global Discrepancy via Subspace Refinement and Divergence Control in Federated Prompt Learning
- Authors: Yicheng Di, Wei Yuan, Tieke He, Zhanjie Zhang, Ao Ma, Yuan Liu, Hongzhi Yin
- Abstract summary: Vision-language pretrained models offer strong transferable representations, yet adapting them in privacy-sensitive multi-party settings is challenging. We propose SDFed, a heterogeneous federated prompt learning framework that bridges Local-Global Discrepancy via Subspace Refinement and Divergence Control.
- Score: 37.399623527540754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-language pretrained models offer strong transferable representations, yet adapting them in privacy-sensitive multi-party settings is challenging due to the high communication cost of federated optimization and the limited local data on clients. Federated prompt learning mitigates this issue by keeping the VLPM backbone frozen and collaboratively training lightweight prompt parameters. However, existing approaches typically enforce a unified prompt structure and length across clients, which is inadequate under practical client heterogeneity in both data distributions and system resources, and may further introduce conflicts between globally shared and locally optimal knowledge. To address these challenges, we propose \textbf{SDFed}, a heterogeneous federated prompt learning framework that bridges Local-Global Discrepancy via Subspace Refinement and Divergence Control. SDFed maintains a fixed-length global prompt for efficient aggregation while allowing each client to learn a variable-length local prompt to better match its data characteristics and capacity. To mitigate local-global conflicts and facilitate effective knowledge transfer, SDFed introduces a subspace refinement method for local prompts and an information retention and divergence control strategy that preserves key local information while maintaining appropriate separability between global and local representations. Extensive experiments on several datasets demonstrate that SDFed consistently improves performance and robustness in heterogeneous federated settings.
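The listing ships no code for SDFed itself, so the following is only a minimal sketch of the prompt layout the abstract describes, assuming a PyTorch-style setup: a fixed-length global prompt the server can average directly, a variable-length local prompt that never leaves the client, and one plausible reading of "divergence control" as a cosine-margin penalty. All names, shapes, and hyperparameters are assumptions, and the subspace-refinement and information-retention components are not modeled here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClientPrompts(nn.Module):
    """Hypothetical prompt container for one client (not SDFed's actual code)."""

    def __init__(self, dim: int, global_len: int = 8, local_len: int = 4):
        super().__init__()
        # Fixed-length global prompt: same shape on every client, so the
        # server can average it directly across clients.
        self.global_prompt = nn.Parameter(torch.randn(global_len, dim) * 0.02)
        # Variable-length local prompt: each client picks local_len to match
        # its data and capacity; it is never communicated.
        self.local_prompt = nn.Parameter(torch.randn(local_len, dim) * 0.02)

    def forward(self) -> torch.Tensor:
        # Concatenated prompts would be prepended to the frozen VLPM encoder input.
        return torch.cat([self.global_prompt, self.local_prompt], dim=0)

def divergence_control(global_p: torch.Tensor, local_p: torch.Tensor,
                       margin: float = 0.3) -> torch.Tensor:
    # One plausible reading of "divergence control": penalize the mean global
    # and local prompt directions when their cosine similarity exceeds
    # 1 - margin, keeping the two representations separable.
    g_dir = F.normalize(global_p.mean(dim=0), dim=0)
    l_dir = F.normalize(local_p.mean(dim=0), dim=0)
    return F.relu(torch.dot(g_dir, l_dir) - (1.0 - margin))

def aggregate_global(prompts: list) -> torch.Tensor:
    # Server side: only the fixed-shape global prompts are averaged.
    return torch.stack(prompts).mean(dim=0)

# Clients may use different local lengths; aggregation still works because
# it only ever touches the fixed-shape global prompt.
clients = [ClientPrompts(dim=512, local_len=n) for n in (2, 4, 6)]
new_global = aggregate_global([c.global_prompt.detach() for c in clients])
reg = divergence_control(clients[0].global_prompt, clients[0].local_prompt)
```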
Related papers
- Personalized Federated Learning via Dual-Prompt Optimization and Cross Fusion [44.8670376715096]
Federated learning (FL) enables collaborative model training across decentralized clients without sharing local data.
We propose a personalized FL framework based on dual-prompt learning and cross fusion, termed pFedDC.
arXiv Detail & Related papers (2025-06-26T10:59:14Z)
- FOCoOp: Enhancing Out-of-Distribution Robustness in Federated Prompt Learning for Vision-Language Models [56.31350619667909]
Federated prompt learning (FPL) for vision-language models is a powerful approach to collaboratively adapt models across distributed clients.
Existing FPL approaches suffer from a trade-off between performance and robustness, particularly under out-of-distribution (OOD) shifts.
We introduce a Federated OOD-aware Context Optimization (FOCoOp) framework, which captures diverse distributions among clients.
arXiv Detail & Related papers (2025-06-19T11:16:02Z)
- Hierarchical Knowledge Structuring for Effective Federated Learning in Heterogeneous Environments [0.6144680854063939]
Federated learning enables collaborative model training across distributed entities while maintaining individual data privacy.
Recent efforts leverage logit-based knowledge aggregation and distillation to overcome these issues.
We propose a Hierarchical Knowledge Structuring (HKS) framework that formulates sample logits into a multi-granularity codebook.
arXiv Detail & Related papers (2025-04-04T15:06:02Z)
- Federated Domain Generalization via Prompt Learning and Aggregation [20.933631678895765]
Federated domain generalization (FedDG) aims to improve the global model generalization in unseen domains.
A common strategy in existing FedDG studies involves sharing domain-specific knowledge among clients.
We introduce prompt learning to adapt pre-trained vision-language models (VLMs) in the FedDG scenario.
arXiv Detail & Related papers (2024-11-15T09:26:00Z)
- Global and Local Prompts Cooperation via Optimal Transport for Federated Learning [13.652593797756774]
We present Federated Prompts Cooperation via Optimal Transport (FedOTP), which introduces efficient collaborative prompt learning strategies to capture diverse category traits on a per-client basis.
Specifically, for each client, we learn a global prompt to extract consensus knowledge among clients, and a local prompt to capture client-specific category characteristics.
Unbalanced Optimal Transport is then employed to align local visual features with these prompts, striking a balance between global consensus and local personalization.
arXiv Detail & Related papers (2024-02-29T11:43:04Z)
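As a rough illustration of the FedOTP entry above: compute a transport plan between a client's visual feature tokens and its two prompt embeddings (global and local), then inspect which prompt each token aligns with. The paper employs unbalanced optimal transport; this sketch runs plain balanced Sinkhorn with uniform marginals for brevity, and all tensor names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def sinkhorn(cost: torch.Tensor, eps: float = 0.1, iters: int = 50) -> torch.Tensor:
    # Entropic OT with uniform marginals: returns a plan whose rows sum to
    # 1/n and columns to 1/m (a balanced simplification of the paper's
    # unbalanced OT).
    n, m = cost.shape
    K = torch.exp(-cost / eps)            # Gibbs kernel
    a = torch.full((n,), 1.0 / n)
    b = torch.full((m,), 1.0 / m)
    u, v = a.clone(), b.clone()
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy shapes: 49 visual tokens vs. 2 prompt embeddings (global, local).
visual = F.normalize(torch.randn(49, 512), dim=1)
prompts = F.normalize(torch.randn(2, 512), dim=1)
cost = 1.0 - visual @ prompts.T           # cosine distance as transport cost
plan = sinkhorn(cost)
# Each token's dominant column indicates whether it aligns with the
# consensus (global) or the personalized (local) prompt.
print(plan.argmax(dim=1).bincount(minlength=2))
```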
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class-prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
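To make the FedCSD summary concrete, below is a plain temperature-scaled distillation loss that aligns the local model's logits with the global model's, which is the general mechanism the entry describes. The paper's specific class-prototype construction is not reproduced; the function name and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def csd_loss(local_logits: torch.Tensor, global_logits: torch.Tensor,
             tau: float = 2.0) -> torch.Tensor:
    # Soften both distributions so inter-class similarity, not just the
    # argmax, transfers from the global model to the local one.
    p_global = F.softmax(global_logits / tau, dim=1)
    log_p_local = F.log_softmax(local_logits / tau, dim=1)
    # KL(global || local), scaled by tau^2 as in standard distillation.
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * tau ** 2

local_logits = torch.randn(32, 10, requires_grad=True)  # local model output
global_logits = torch.randn(32, 10)                     # frozen global model output
loss = csd_loss(local_logits, global_logits)
loss.backward()
```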
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
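The personalized-sparse-mask idea in the Dis-PFL entry above can be sketched in a few lines: each client derives a binary mask (here by weight magnitude, a common heuristic) and trains and communicates only the surviving entries. The paper's actual mask generation and decentralized gossip protocol are more involved; this toy version is an assumption.

```python
import torch

def make_mask(weight: torch.Tensor, sparsity: float = 0.8) -> torch.Tensor:
    # Keep the top (1 - sparsity) fraction of weights by magnitude.
    k = int(weight.numel() * (1.0 - sparsity))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

weight = torch.randn(256, 128)
mask = make_mask(weight)           # personalized mask, kept on the client
sparse_weight = weight * mask      # only ~20% of entries stay active
# Communication saving: only the masked (nonzero) entries would be
# exchanged with peers in the decentralized protocol.
print(f"active fraction: {mask.mean().item():.2f}")
```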
- FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local and global model parameters.
Experimental results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
arXiv Detail & Related papers (2022-03-22T14:06:26Z)
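A compact sketch of the drift variable that the FedDC entry describes: the client keeps an auxiliary vector h accumulating the gap between its local weights and the last global weights, and a quadratic penalty keeps local-plus-drift close to the global model. The stand-in task loss, penalty weight, and update schedule below are assumptions, and the paper's gradient-correction term is omitted.

```python
import torch

theta_global = torch.zeros(10)                    # last global weights
theta_local = theta_global.clone().requires_grad_(True)
h = torch.zeros(10)                               # auxiliary local drift variable

opt = torch.optim.SGD([theta_local], lr=0.1)
for _ in range(5):                                # local training steps
    opt.zero_grad()
    task_loss = ((theta_local - torch.ones(10)) ** 2).sum()  # stand-in task
    # Drift penalty: local weights plus accumulated drift should track
    # the global weights.
    penalty = ((theta_local + h - theta_global) ** 2).sum()
    (task_loss + 0.5 * penalty).backward()
    opt.step()

# After local training, fold the new local-global gap into the drift variable.
h += theta_local.detach() - theta_global
```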
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates on the low-dimensional local parameters for every update of the shared representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
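The split described in this last entry (a shared low-dimensional representation aggregated by the server, plus a private head per client, with many cheap head updates per representation update) can be sketched as follows; layer sizes, step counts, and learning rates are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

shared_encoder = nn.Linear(784, 32)                  # aggregated by the server
heads = [nn.Linear(32, 10) for _ in range(3)]        # one private head per client
loss_fn = nn.CrossEntropyLoss()

def local_round(client: int, x: torch.Tensor, y: torch.Tensor) -> None:
    head = heads[client]
    # Many cheap updates of the low-dimensional private head...
    head_opt = torch.optim.SGD(head.parameters(), lr=0.1)
    for _ in range(10):
        head_opt.zero_grad()
        loss_fn(head(shared_encoder(x)), y).backward()
        head_opt.step()
    # ...followed by a single update of the shared representation.
    enc_opt = torch.optim.SGD(shared_encoder.parameters(), lr=0.01)
    enc_opt.zero_grad()
    loss_fn(head(shared_encoder(x)), y).backward()
    enc_opt.step()

x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
for c in range(3):
    local_round(c, x, y)
# Server step (not shown): average shared_encoder's weights across clients.
```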