ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation
- URL: http://arxiv.org/abs/2511.16069v1
- Date: Thu, 20 Nov 2025 05:59:37 GMT
- Title: ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation
- Authors: Junchao Zhou, Junkang Liu, Fanhua Shang, et al.
- Abstract summary: We propose ILoRA, a unified framework that integrates three core innovations. ILoRA consistently achieves superior accuracy and convergence stability compared to existing federated LoRA methods.
- Score: 15.926254171159146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning with Low-Rank Adaptation (LoRA) faces three critical challenges under client heterogeneity: (1) Initialization-Induced Instability, since random initialization misaligns client subspaces; (2) Rank Incompatibility and Aggregation Error when averaging LoRA parameters of different ranks, which biases the global model; and (3) Exacerbated Client Drift under Non-IID Data, which impairs generalization. To address these challenges, we propose ILoRA, a unified framework that integrates three core innovations: a QR-based orthonormal initialization to ensure all clients start in a coherent subspace; a Concatenated QR Aggregation mechanism that fuses heterogeneous-rank updates via concatenation and decomposition, preserving information while maintaining dimension alignment; and an AdamW optimizer with rank-aware control variates to correct local updates and mitigate client drift. The framework is supported by theoretical convergence guarantees, and extensive experiments on vision and NLP benchmarks demonstrate that ILoRA consistently achieves superior accuracy and convergence stability compared to existing federated LoRA methods.
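As a rough illustration of the concatenation-and-decomposition step described in the abstract, here is a minimal NumPy sketch. It relies only on the identity [w_1 B_1 | ... | w_K B_K][A_1; ...; A_K] = sum_i w_i B_i A_i; the SVD-based truncation back to a global rank and the name `concat_qr_aggregate` are assumptions, not the authors' exact algorithm (which also pairs this step with QR-based initialization and rank-aware control variates).

```python
import numpy as np

def concat_qr_aggregate(Bs, As, weights, r_global):
    """Hedged sketch of concatenation-and-decomposition aggregation for
    heterogeneous-rank LoRA factors. Client i holds B_i (d x r_i) and
    A_i (r_i x k); its update is B_i @ A_i. Because
        [w_1*B_1 | ... | w_K*B_K] @ [A_1; ...; A_K] = sum_i w_i * B_i @ A_i,
    concatenation reproduces the exact weighted sum without zero-padding.
    The SVD-based truncation back to r_global is an assumption, not
    necessarily ILoRA's exact reduction step.
    """
    B_cat = np.hstack([w * B for w, B in zip(weights, Bs)])  # d x sum(r_i)
    A_cat = np.vstack(As)                                    # sum(r_i) x k
    Q, R = np.linalg.qr(B_cat)        # orthonormal basis Q, mixing matrix R
    M = R @ A_cat                     # small matrix carrying all the energy
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = min(r_global, s.size)         # best rank-r approx of the aggregate
    B_new = (Q @ U[:, :r]) * s[:r]    # d x r, columns scaled by singular values
    A_new = Vt[:r]                    # r x k
    return B_new, A_new
```

For example, two clients with ranks 4 and 8 on a 768 x 768 layer produce a B_cat of shape (768, 12); the fused update is then truncated back to the server's global rank, so no client's higher-rank directions are discarded before aggregation.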
Related papers
- FedRot-LoRA: Mitigating Rotational Misalignment in Federated LoRA [25.49850401602623]
Federated LoRA provides a communication-efficient mechanism for fine-tuning large language models on decentralized data. In practice, a discrepancy between the factor-wise averaging used to preserve low rank and the mathematically correct aggregation of local updates can cause significant aggregation error and unstable training. We propose FedRot-LoRA, a framework that aligns client updates via transformations prior to aggregation.
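The factorization B_i A_i is invariant to B_i -> B_i R, A_i -> R^T A_i for any orthogonal R, which is why factor-wise averaging can silently mix misaligned factors. A minimal sketch of one way to align factors before averaging, via orthogonal Procrustes against a reference client (the reference choice and the Procrustes step are assumptions, not necessarily FedRot-LoRA's transformation):

```python
import numpy as np

def align_then_average(Bs, As):
    """Align each client's factors to a reference before factor-wise
    averaging. Assumes a common rank r across clients, as factor-wise
    averaging does; this is an illustrative sketch, not the paper's
    verified algorithm."""
    B_ref = Bs[0]
    Bs_aligned, As_aligned = [], []
    for B, A in zip(Bs, As):
        # Orthogonal Procrustes: R = argmin over orthogonal R of ||B R - B_ref||_F
        U, _, Vt = np.linalg.svd(B.T @ B_ref, full_matrices=False)
        R = U @ Vt
        Bs_aligned.append(B @ R)     # rotate basis toward the reference
        As_aligned.append(R.T @ A)   # counter-rotate so B @ A is unchanged
    return sum(Bs_aligned) / len(Bs), sum(As_aligned) / len(As)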
arXiv Detail & Related papers (2026-02-27T03:18:32Z)
- Preventing Rank Collapse in Federated Low-Rank Adaptation with Client Heterogeneity [43.719298075378425]
Federated low-rank adaptation (FedLoRA) has facilitated communication-efficient and privacy-preserving fine-tuning of foundation models for downstream tasks. We identify a previously overlooked phenomenon in heterogeneous FedLoRA, termed rank collapse, where the energy of the global update concentrates on the minimum shared rank. We propose raFLoRA, a rank-partitioned aggregation method that decomposes local updates into rank partitions and then aggregates each partition weighted by its effective client contributions.
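To make "rank partitions" concrete, here is a hedged sketch: each local update is split into rank-1 components by singular value, and the j-th component is averaged only over the clients whose rank actually reaches j, so higher partitions are not diluted toward zero. The SVD-based partitioning and contributor weighting are assumptions about raFLoRA, not its verified algorithm.

```python
import numpy as np

def rank_partitioned_aggregate(updates, weights):
    """updates: list of d x k local update matrices (possibly different ranks);
    weights: per-client aggregation weights (e.g., data fractions).
    Returns the aggregated full-matrix update; refactoring back into LoRA
    factors is omitted for brevity."""
    d, k = updates[0].shape
    comps = []
    for dW in updates:
        # Per-client rank-1 components, ordered by singular value.
        U, s, Vt = np.linalg.svd(dW, full_matrices=False)
        r = int(np.sum(s > 1e-10))  # effective local rank
        comps.append([s[j] * np.outer(U[:, j], Vt[j]) for j in range(r)])

    agg = np.zeros((d, k))
    for j in range(max(len(c) for c in comps)):   # one rank partition at a time
        contributors = [i for i, c in enumerate(comps) if len(c) > j]
        w_sum = sum(weights[i] for i in contributors)
        # Normalize by the contributing clients only, not all clients,
        # so this partition's energy is not averaged down toward zero.
        agg += sum(weights[i] * comps[i][j] for i in contributors) / w_sum
    return agg
```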
arXiv Detail & Related papers (2026-02-13T21:42:06Z)
- Roughness-Informed Federated Learning [3.8218584696400484]
Federated Learning (FL) enables collaborative model training across distributed clients, but it faces challenges in non-independent and identically distributed (non-IID) settings due to client drift. We propose RI-FedAvg, a novel FL method that mitigates client drift by incorporating a Roughness Index (RI)-based regularization term.
arXiv Detail & Related papers (2026-02-11T07:35:45Z)
- Adaptive Dual-Weighting Framework for Federated Learning via Out-of-Distribution Detection [53.45696787935487]
Federated Learning (FL) enables collaborative model training across large-scale distributed service nodes. In real-world service-oriented deployments, data generated by heterogeneous users, devices, and application scenarios are inherently non-IID. We propose FLood, a novel FL framework inspired by out-of-distribution (OOD) detection.
arXiv Detail & Related papers (2026-02-01T05:54:59Z)
- Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models [63.70401095689976]
We argue that replacing parameters with preferences represents a more scalable and privacy-preserving future. We propose MoR, a federated alignment framework based on GRPO with Mixture-of-Rewards for heterogeneous VLMs. MoR consistently outperforms federated alignment baselines in generalization, robustness, and cross-client adaptability.
arXiv Detail & Related papers (2026-01-31T03:11:51Z)
- CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks [51.43780477302533]
Contribution-Oriented PFL (CO-PFL) is a novel algorithm that dynamically estimates each client's contribution for global aggregation. CO-PFL consistently surpasses state-of-the-art methods in personalization accuracy, robustness, scalability, and convergence stability.
arXiv Detail & Related papers (2025-10-23T05:10:06Z)
- Rediscovering Entropy Regularization: Adaptive Coefficient Unlocks Its Potential for LLM Reinforcement Learning [55.59724323303857]
We propose AER, a framework that balances exploration and exploitation via three components: difficulty-aware coefficient allocation, initial-anchored target entropy, and dynamic global coefficient adjustment. Experiments on multiple mathematical reasoning benchmarks show that AER consistently outperforms baselines, improving both reasoning accuracy and exploration capability.
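A minimal sketch of the dynamic-coefficient idea, assuming a simple linear update toward the target entropy (the function name, step size, and bounds are hypothetical illustrations, not AER's actual rule):

```python
def adjust_entropy_coef(coef, measured_entropy, target_entropy,
                        step_size=1e-3, min_coef=0.0, max_coef=0.1):
    """Raise the entropy coefficient when policy entropy drops below an
    (initial-anchored) target, lower it when entropy overshoots, keeping
    it within fixed bounds. Purely illustrative of adaptive entropy
    regularization; not the paper's algorithm."""
    coef += step_size * (target_entropy - measured_entropy)
    return min(max(coef, min_coef), max_coef)
```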
arXiv Detail & Related papers (2025-10-13T03:10:26Z)
- AFLoRA: Adaptive Federated Fine-Tuning of Large Language Models with Resource-Aware Low-Rank Adaption [3.805501490912696]
Federated fine-tuning has emerged as a promising approach to adapt foundation models to downstream tasks using decentralized data. We propose AFLoRA, an adaptive and lightweight federated fine-tuning framework for Large Language Models.
arXiv Detail & Related papers (2025-05-30T16:35:32Z)
- FedHL: Federated Learning for Heterogeneous Low-Rank Adaptation via Unbiased Aggregation [6.5370850242187855]
Federated Learning (FL) facilitates the fine-tuning of Foundation Models (FMs) using distributed data sources, with Low-Rank Adaptation (LoRA) gaining popularity due to its low communication costs and strong performance. However, existing methods lack formal convergence guarantees due to parameter truncation and biased gradient updates.
arXiv Detail & Related papers (2025-05-24T04:12:12Z)
- HAFLQ: Heterogeneous Adaptive Federated LoRA Fine-tuned LLM with Quantization [55.972018549438964]
Federated fine-tuning of pre-trained Large Language Models (LLMs) enables task-specific adaptation across diverse datasets while preserving privacy. We propose HAFLQ (Heterogeneous Adaptive Federated Low-Rank Adaptation Fine-tuned LLM with Quantization), a novel framework for efficient and scalable fine-tuning of LLMs in heterogeneous environments. Experimental results on the text classification task demonstrate that HAFLQ reduces memory usage by 31%, lowers communication cost by 49%, improves accuracy by 50%, and achieves faster convergence compared to the baseline method.
arXiv Detail & Related papers (2024-11-10T19:59:54Z)
- Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity [12.515874333424929]
We observe that heterogeneous ranks among clients lead to unstable performance. Our analysis attributes this instability to the conventional zero-padding aggregation strategy. We propose a replication-based padding strategy that better retains valuable information from clients with high-quality data.
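To see why replication can outperform zero-padding, note that zero-padded slots pull the average toward zero, whereas replicated slots can be rescaled so the padded factors still reproduce the original update exactly. A minimal sketch, assuming cyclic replication and an equal mass split (`replication_pad` and both choices are hypothetical, not the paper's exact strategy):

```python
import numpy as np

def replication_pad(B, A, r_target):
    """Pad B (d x r) and A (r x k) up to rank r_target by cyclically
    replicating rank components, rescaling A so that
    B_pad @ A_pad == B @ A exactly. Illustrative only."""
    r = B.shape[1]
    assert r_target >= r, "can only pad up to a larger rank"
    idx = np.arange(r_target) % r                 # cyclic column indices
    counts = np.bincount(idx, minlength=r).astype(float)
    B_pad = B[:, idx]                             # d x r_target
    A_pad = A[idx, :] / counts[idx][:, None]      # split mass across replicas
    return B_pad, A_pad
```

Each original rank-1 component appears `counts[j]` times in the padded factors, each carrying 1/counts[j] of its mass, so the product is preserved while every slot contributes real information to the average.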
arXiv Detail & Related papers (2024-06-25T11:49:33Z)
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-Attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient ensembling method for self-attention networks. The method not only outperforms state-of-the-art implicit techniques like BatchEnsemble, but even matches or exceeds the accuracy of an Explicit Ensemble.
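A minimal PyTorch sketch of the underlying idea: one frozen weight matrix shared by all ensemble members, each member adding its own low-rank update, so member predictions can be averaged or dispersed for uncertainty estimates. Shapes, initialization, and the class name are assumptions, not the paper's recipe.

```python
import torch
import torch.nn as nn

class LoRAEnsembleLinear(nn.Module):
    """One frozen backbone weight shared across n_members ensemble members,
    each adding its own rank-r update; an illustrative sketch only."""
    def __init__(self, d_in, d_out, n_members=4, r=8):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5,
                              requires_grad=False)          # frozen backbone
        self.A = nn.Parameter(torch.randn(n_members, r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_members, d_out, r))

    def forward(self, x):                      # x: (batch, d_in)
        base = x @ self.W.T                    # shared backbone output
        low = torch.einsum('bi,mri->bmr', x, self.A)   # per-member rank-r
        delta = torch.einsum('bmr,mor->bmo', low, self.B)
        return base.unsqueeze(1) + delta       # (batch, members, d_out)
```

Averaging softmax outputs over the member dimension gives the ensemble prediction; their spread serves as an uncertainty signal, at a fraction of the parameter cost of training separate networks.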
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew diverts current federated learning (FL) frameworks from consistent optimization directions among clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL achieves higher performance, better interpretability, and a faster convergence rate compared with SOTA FL methods.
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.