SDFLoRA: Selective Dual-Module LoRA for Federated Fine-tuning with Heterogeneous Clients
- URL: http://arxiv.org/abs/2601.11219v1
- Date: Fri, 16 Jan 2026 11:53:38 GMT
- Title: SDFLoRA: Selective Dual-Module LoRA for Federated Fine-tuning with Heterogeneous Clients
- Authors: Zhikang Shen, Jianrong Lu, Haiyuan Wan, Jianhai Chen
- Abstract summary: Federated learning for large language models (LLMs) has attracted increasing attention as a way to enable privacy-preserving adaptation over distributed data. We propose Selective Dual-module Federated LoRA (SDFLoRA), which decomposes each client into a global module that captures transferable knowledge and a local module that preserves client-specific adaptations. Experiments on GLUE benchmarks demonstrate that SDFLoRA outperforms representative federated LoRA baselines and achieves a better utility-privacy trade-off.
- Score: 4.862708813950415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) for large language models (LLMs) has attracted increasing attention as a way to enable privacy-preserving adaptation over distributed data. Parameter-efficient methods such as LoRA are widely adopted to reduce communication and memory costs. Despite these advances, practical FL deployments often exhibit rank heterogeneity, since different clients may use different low-rank configurations. This makes direct aggregation of LoRA updates biased and unstable. Existing solutions typically enforce unified ranks or align heterogeneous updates into a shared subspace, which over-constrains client-specific semantics, limits personalization, and provides weak protection of local client information under differential privacy noise. To address this issue, we propose Selective Dual-module Federated LoRA (SDFLoRA), which decomposes each client adapter into a global module that captures transferable knowledge and a local module that preserves client-specific adaptations. The global module is selectively aligned and aggregated across clients, while local modules remain private. This design enables robust learning under rank heterogeneity and supports privacy-aware optimization by injecting differential privacy noise exclusively into the global module. Experiments on GLUE benchmarks demonstrate that SDFLoRA outperforms representative federated LoRA baselines and achieves a better utility-privacy trade-off.
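The dual-module decomposition and global-only noise injection described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the dimensions, noise scale, and zero-initialized up-projections are illustrative assumptions, and local training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT = 16, 16      # hypothetical layer dimensions
R_GLOBAL = 4              # shared rank for the global module
SIGMA = 0.1               # illustrative Gaussian DP noise scale

class DualLoRA:
    """One client's adapter: a shareable global module plus a private local one."""
    def __init__(self, local_rank):
        self.A_g = rng.normal(0, 0.02, (R_GLOBAL, D_IN))    # global down-projection
        self.B_g = np.zeros((D_OUT, R_GLOBAL))              # global up-projection
        self.A_l = rng.normal(0, 0.02, (local_rank, D_IN))  # local (client-specific)
        self.B_l = np.zeros((D_OUT, local_rank))

    def delta(self):
        # Combined low-rank update applied on top of the frozen base weight
        return self.B_g @ self.A_g + self.B_l @ self.A_l

def aggregate_global(clients, sigma=SIGMA):
    """Average only the global modules, adding Gaussian noise for DP.

    Local modules never leave the clients, so noise is injected
    exclusively into the shared (global) parameters."""
    A_avg = np.mean([c.A_g for c in clients], axis=0)
    B_avg = np.mean([c.B_g for c in clients], axis=0)
    A_avg += rng.normal(0, sigma, A_avg.shape)
    B_avg += rng.normal(0, sigma, B_avg.shape)
    for c in clients:
        c.A_g, c.B_g = A_avg.copy(), B_avg.copy()

# Rank-heterogeneous clients: local ranks differ, the global rank is shared
clients = [DualLoRA(local_rank=r) for r in (2, 4, 8)]
aggregate_global(clients)
assert all(np.allclose(c.A_g, clients[0].A_g) for c in clients)  # global synced
assert clients[0].A_l.shape != clients[2].A_l.shape              # local stays heterogeneous
```

The key design point the sketch captures is that aggregation touches only the fixed-rank global factors, so clients with different local ranks can still be averaged, and DP noise never perturbs the private local modules.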
Related papers
- HiLoRA: Hierarchical Low-Rank Adaptation for Personalized Federated Learning [11.466314810697169]
Low-Rank Adaptation (LoRA) provides an efficient and communication-friendly way to adapt Vision Transformers (ViTs). We propose HiLoRA, a hierarchical LoRA framework that places adapters at three levels: root, cluster, and leaf. We develop a LoRA-Subspace Adaptive Clustering mechanism that infers latent client groups via subspace similarity analysis.
arXiv Detail & Related papers (2026-03-03T09:25:16Z) - WinFLoRA: Incentivizing Client-Adaptive Aggregation in Federated LoRA under Privacy Heterogeneity [13.687946058105156]
WinFLoRA is a privacy-heterogeneous federated LoRA framework that uses aggregation weights as incentives with noise awareness. WinFLoRA achieves up to 52.58% higher global accuracy and up to 2.56x higher client utility than state-of-the-art benchmarks.
arXiv Detail & Related papers (2026-02-01T09:52:57Z) - Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models [63.70401095689976]
We argue that replacing parameters with preferences represents a more scalable and privacy-preserving future. We propose MoR, a federated alignment framework based on GRPO with Mixture-of-Rewards for heterogeneous VLMs. MoR consistently outperforms federated alignment baselines in generalization, robustness, and cross-client adaptability.
arXiv Detail & Related papers (2026-01-31T03:11:51Z) - FedVLM: Scalable Personalized Vision-Language Models through Federated Learning [3.2948524285562866]
Vision-language models (VLMs) demonstrate impressive zero-shot and few-shot learning capabilities. Fine-tuning these models at scale remains challenging in federated environments where data is decentralized and non-iid across clients. We propose FedVLM, a federated LoRA fine-tuning framework that enables decentralized adaptation of VLMs while preserving model privacy.
arXiv Detail & Related papers (2025-07-23T00:05:02Z) - FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA Subparameter Updates [58.18162789618869]
Federated Learning (FL) is a widely used framework for training models in a decentralized manner. We propose the FedRand framework, which avoids disclosing the full set of client parameters. We empirically validate that FedRand improves robustness against MIAs compared to relevant baselines.
arXiv Detail & Related papers (2025-03-10T11:55:50Z) - Federated Sketching LoRA: A Flexible Framework for Heterogeneous Collaborative Fine-Tuning of LLMs [37.03583502049329]
Fine-tuning large language models (LLMs) on resource-constrained clients remains a challenging problem. Recent works have fused low-rank adaptation (LoRA) techniques with federated fine-tuning to mitigate challenges associated with client model sizes and data scarcity. We propose federated sketching LoRA, which leverages a sketching mechanism to enable clients to update submatrices of global LoRA modules maintained by the server.
arXiv Detail & Related papers (2025-01-31T18:44:35Z) - Fed-pilot: Optimizing LoRA Allocation for Efficient Federated Fine-Tuning with Heterogeneous Clients [11.102441622530181]
We propose Fed-pilot, a memory-efficient federated fine-tuning framework. It enables memory-constrained clients to participate in Low-Rank Adaptation (LoRA)-based fine-tuning by training only a subset of LoRA modules locally. To the best of our knowledge, this is the first study on federated fine-tuning of FMs that integrates memory-constrained optimization.
arXiv Detail & Related papers (2024-10-14T06:36:41Z) - Communication-Efficient Personalized Federated Learning for Speech-to-Text Tasks [64.02867484165476]
To protect privacy and meet legal regulations, federated learning (FL) has gained significant attention for training speech-to-text (S2T) systems. The commonly used FL approach (i.e., FedAvg) in S2T tasks typically suffers from extensive communication overhead. We propose a personalized federated S2T framework that introduces FedLoRA, a lightweight LoRA module for client-side tuning and interaction with the server, and FedMem, a global model equipped with a $k$-near
arXiv Detail & Related papers (2024-01-18T15:39:38Z) - FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients [50.13097183691517]
In real-world federated scenarios, there often exists a multitude of heterogeneous clients with varying computation and communication resources.
We propose a novel federated tuning algorithm, FedRA.
In each communication round, FedRA randomly generates an allocation matrix.
It reorganizes a small number of layers from the original model based on the allocation matrix and fine-tunes using adapters.
arXiv Detail & Related papers (2023-11-19T04:43:16Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a new algorithm that performs class prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning aims to remove a specified target client's contribution in federated learning (FL) to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
arXiv Detail & Related papers (2023-02-24T04:29:44Z)
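Among the related methods above, the sketching mechanism of Federated Sketching LoRA can be illustrated with a minimal NumPy sketch: each client samples a subset of the server's rank indices and updates only that submatrix of the global LoRA factors. The dimensions and the perturbation standing in for local training are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

D_IN, D_OUT, R_SERVER = 16, 16, 8   # hypothetical dims; server-side LoRA rank

# Server-side global LoRA factors
A_srv = rng.normal(0, 0.02, (R_SERVER, D_IN))
B_srv = np.zeros((D_OUT, R_SERVER))

def client_round(client_rank, local_update_scale=0.01):
    """One client round: sample a rank-index sketch, update only that submatrix."""
    idx = rng.choice(R_SERVER, size=client_rank, replace=False)  # the sketch
    # The client downloads only the sketched rows/columns
    A_sub, B_sub = A_srv[idx], B_srv[:, idx]
    # Stand-in for local training: a small perturbation of the submatrices
    A_new = A_sub + local_update_scale * rng.normal(size=A_sub.shape)
    B_new = B_sub + local_update_scale * rng.normal(size=B_sub.shape)
    return idx, A_new, B_new

# Heterogeneous clients touch differently sized submatrices of one global LoRA
for rank in (2, 4, 8):
    idx, A_new, B_new = client_round(rank)
    A_srv[idx] = A_new            # server writes back only sketched entries
    B_srv[:, idx] = B_new
```

The point of the sketch is that clients with different capacities never need matching ranks: each one trains a submatrix sized to its budget, and the server's factors keep a single fixed shape across rounds.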
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.