Preventing Rank Collapse in Federated Low-Rank Adaptation with Client Heterogeneity
- URL: http://arxiv.org/abs/2602.13486v1
- Date: Fri, 13 Feb 2026 21:42:06 GMT
- Title: Preventing Rank Collapse in Federated Low-Rank Adaptation with Client Heterogeneity
- Authors: Fei Wu, Jia Hu, Geyong Min, Shiqiang Wang
- Abstract summary: Federated low-rank adaptation (FedLoRA) has facilitated communication-efficient and privacy-preserving fine-tuning of foundation models for downstream tasks. We identify a previously overlooked phenomenon in heterogeneous FedLoRA, termed rank collapse, where the energy of the global update concentrates on the minimum shared rank. We propose raFLoRA, a rank-partitioned aggregation method that decomposes local updates into rank partitions and then aggregates each partition weighted by its effective client contributions.
- Score: 43.719298075378425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated low-rank adaptation (FedLoRA) has facilitated communication-efficient and privacy-preserving fine-tuning of foundation models for downstream tasks. In practical federated learning scenarios, client heterogeneity in system resources and data distributions motivates heterogeneous LoRA ranks across clients. We identify a previously overlooked phenomenon in heterogeneous FedLoRA, termed rank collapse, where the energy of the global update concentrates on the minimum shared rank, resulting in suboptimal performance and high sensitivity to rank configurations. Through theoretical analysis, we reveal the root cause of rank collapse: a mismatch between rank-agnostic aggregation weights and rank-dependent client contributions, which systematically suppresses higher-rank updates at a geometric rate over rounds. Motivated by this insight, we propose raFLoRA, a rank-partitioned aggregation method that decomposes local updates into rank partitions and then aggregates each partition weighted by its effective client contributions. Extensive experiments across classification and reasoning tasks show that raFLoRA prevents rank collapse, improves model performance, and preserves communication efficiency compared to state-of-the-art FedLoRA baselines.
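The abstract describes rank-partitioned aggregation only in prose, so the following is a minimal sketch of the idea under stated assumptions: clients send LoRA factors (B, A) of heterogeneous rank, and the server averages each rank partition over only the clients that actually cover it. The function name, shapes, and weighting rule are illustrative, not the authors' exact algorithm.

```python
import numpy as np

def rank_partitioned_aggregate(updates, d_out, d_in):
    """Hypothetical rank-partitioned aggregation of LoRA updates.

    updates: list of (B, A) pairs, B: (d_out, r_k), A: (r_k, d_in),
             with heterogeneous ranks r_k across clients.
    """
    r_max = max(B.shape[1] for B, _ in updates)
    delta = np.zeros((d_out, d_in))
    for j in range(r_max):  # partition j = the j-th rank-1 component
        # only clients whose local rank reaches index j contribute here
        contributors = [(B, A) for B, A in updates if B.shape[1] > j]
        if not contributors:
            continue
        part = sum(np.outer(B[:, j], A[j]) for B, A in contributors)
        # divide by the number of contributing clients (rank-dependent
        # weights); the rank-agnostic choice len(updates) would shrink
        # sparsely covered high-rank partitions geometrically per round
        delta += part / len(contributors)
    return delta

# toy example: three clients with ranks 2, 4, and 8 on a 16x16 layer
rng = np.random.default_rng(0)
ups = [(rng.normal(size=(16, r)), rng.normal(size=(r, 16))) for r in (2, 4, 8)]
print(rank_partitioned_aggregate(ups, 16, 16).shape)  # (16, 16)
```

The contrast with naive averaging is the denominator: dividing every partition by the total client count is exactly the rank-agnostic weighting the abstract blames for suppressing higher-rank updates.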
Related papers
- ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation [15.926254171159146]
We propose ILoRA, a unified framework that integrates three core innovations. ILoRA consistently achieves superior accuracy and convergence stability compared to existing federated LoRA methods.
arXiv Detail & Related papers (2025-11-20T05:59:37Z)
- Class-wise Balancing Data Replay for Federated Class-Incremental Learning [49.179631011790065]
We propose a class-wise balancing data replay method for Federated Class-Incremental Learning (FCIL). FedCBDR has two key components: 1) the global-perspective data replay module reconstructs global representations of prior tasks in a privacy-preserving manner, then guides a class-aware and importance-sensitive sampling strategy to achieve balanced replay; 2) to handle class imbalance across tasks, the task-aware temperature scaling module adaptively adjusts the temperature of logits at both the class and instance levels based on task dynamics, reducing the model's overconfidence in majority classes while enhancing its sensitivity to minority classes.
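Temperature scaling of logits is a standard technique; as a hedged illustration of the class-wise variant named above (the abstract does not specify the task-aware schedule, so the per-class temperatures here are assumptions):

```python
import numpy as np

def class_temperature_softmax(logits, class_temps):
    """logits: (batch, num_classes); class_temps: (num_classes,).
    T > 1 softens a class's distribution (curbing overconfidence on
    majority classes); T < 1 sharpens it (raising sensitivity to
    minority classes)."""
    scaled = logits / class_temps                        # per-class broadcast
    scaled = scaled - scaled.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scaled)
    return e / e.sum(axis=1, keepdims=True)
```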
arXiv Detail & Related papers (2025-07-10T12:46:31Z)
- Beyond Low-Rank Tuning: Model Prior-Guided Rank Allocation for Effective Transfer in Low-Data and Large-Gap Regimes [9.4848188271008]
Low-Rank Adaptation (LoRA) has proven effective in reducing computational costs while maintaining performance comparable to fully fine-tuned foundation models. Current adaptive LoRA methods attempt to overcome this limitation by dynamically expanding or selectively allocating ranks. We introduce Stable Rank-Guided Low-Rank Adaptation (SR-LoRA), a novel framework that utilizes the stable rank of pre-trained weight matrices as a natural prior for layer-wise rank allocation.
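The stable rank referred to here is a standard quantity, sr(W) = ||W||_F^2 / ||W||_2^2. A short sketch of using it as a layer-wise prior follows; the proportional allocation rule is our assumption, since the summary names the prior but not the exact rule:

```python
import numpy as np

def stable_rank(W):
    # ||W||_F^2 / ||W||_2^2: a smooth, noise-robust lower bound on rank(W)
    return np.linalg.norm(W, "fro") ** 2 / np.linalg.norm(W, 2) ** 2

def allocate_ranks(layer_weights, rank_budget):
    """Split a total LoRA rank budget across layers in proportion to the
    stable rank of each pre-trained weight matrix (illustrative rule)."""
    srs = np.array([stable_rank(W) for W in layer_weights])
    return np.maximum(1, np.round(rank_budget * srs / srs.sum())).astype(int)
```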
arXiv Detail & Related papers (2025-06-30T23:54:23Z)
- NDCG-Consistent Softmax Approximation with Accelerated Convergence [67.10365329542365]
We propose novel loss formulations that align directly with ranking metrics. We integrate the proposed RG losses with the highly efficient Alternating Least Squares (ALS) optimization method. Empirical evaluations on real-world datasets demonstrate that our approach achieves comparable or superior ranking performance.
arXiv Detail & Related papers (2025-06-11T06:59:17Z)
- FedHL: Federated Learning for Heterogeneous Low-Rank Adaptation via Unbiased Aggregation [6.5370850242187855]
Federated Learning (FL) facilitates the fine-tuning of Foundation Models (FMs) using distributed data sources, with Low-Rank Adaptation (LoRA) gaining popularity due to its low communication costs and strong performance. However, existing methods lack formal convergence guarantees due to parameter truncation and biased gradient updates.
arXiv Detail & Related papers (2025-05-24T04:12:12Z)
- FedALT: Federated Fine-Tuning through Adaptive Local Training with Rest-of-World LoRA [5.162783756846019]
Fine-tuning large language models (LLMs) in federated settings enables privacy-preserving adaptation but suffers from cross-client interference due to model aggregation. We propose FedALT, a novel personalized federated LoRA fine-tuning algorithm. We demonstrate that FedALT significantly outperforms state-of-the-art personalized federated LoRA fine-tuning methods.
arXiv Detail & Related papers (2025-03-14T21:07:46Z)
- Communication-Efficient Federated Low-Rank Update Algorithm and its Connection to Implicit Regularization [11.955062839855334]
Federated Learning (FL) faces significant challenges related to communication efficiency and heterogeneity.
We propose FedLoRU, a general low-rank update framework for federated learning.
Our framework enforces low-rank client-side updates and accumulates these updates to form a higher-rank model.
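A minimal sketch of that accumulation idea, assuming clients send LoRA factors (B, A) each round; any detail beyond the sentences above is an assumption:

```python
import numpy as np

def fedloru_round(W, client_factors):
    """One server round: average the clients' low-rank updates B @ A and
    fold the result into the dense weights W. Each round's update has
    rank at most r, but the sum accumulated in W across rounds can reach
    a much higher rank."""
    avg = sum(B @ A for B, A in client_factors) / len(client_factors)
    return W + avg
```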
arXiv Detail & Related papers (2024-09-19T00:11:58Z)
- Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity [12.515874333424929]
We observe that heterogeneous ranks among clients lead to unstable performance. Our analysis attributes this instability to the conventional zero-padding aggregation strategy. We propose a replication-based padding strategy that better retains valuable information from clients with high-quality data.
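As a hedged illustration of the two padding strategies contrasted above (the exact replication scheme is our reading; the summary only names it):

```python
import numpy as np

def zero_pad(A, r_max):
    """Conventional strategy: append zero rows to a rank-r factor A (r, d)
    so all clients match r_max; the zeros dilute low-rank clients when
    padded factors are averaged."""
    r, d = A.shape
    return np.vstack([A, np.zeros((r_max - r, d))])

def replicate_pad(A, r_max):
    """Replication-based strategy: reuse existing rows instead of zeros,
    so the padded dimensions still carry the client's information."""
    r, _ = A.shape
    if r == r_max:
        return A
    extra = np.stack([A[i % r] for i in range(r_max - r)])
    return np.vstack([A, extra])
```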
arXiv Detail & Related papers (2024-06-25T11:49:33Z)
- PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation [65.268245109828]
We introduce PRILoRA, which linearly allocates a different rank to each layer, in increasing order across layers, and performs pruning throughout the training process.
We validate the effectiveness of PRILoRA through extensive experiments on eight GLUE benchmarks, setting a new state of the art.
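A sketch of a linearly increasing per-layer rank schedule consistent with that description; the endpoint ranks here are illustrative assumptions:

```python
import numpy as np

def linear_rank_schedule(num_layers, r_start=4, r_end=12):
    """Allocate a linearly increasing rank to each layer, lowest rank at
    the first layer and highest at the last."""
    return np.round(np.linspace(r_start, r_end, num_layers)).astype(int)

print(linear_rank_schedule(12))  # [ 4  5  5  6  7  8  8  9 10 11 11 12]
```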
arXiv Detail & Related papers (2024-01-20T20:25:17Z)
- Sparse Low-rank Adaptation of Pre-trained Language Models [79.74094517030035]
We introduce sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process.
Our approach strengthens the representation power of LoRA by initializing it with a higher rank, while efficiently taming a temporarily increased number of parameters.
Our experimental results demonstrate that SoRA can outperform other baselines even with 70% retained parameters and 70% training time.
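A minimal sketch of a gated low-rank adapter in this spirit: a learnable gate between the down- and up-projections deletes rank-1 components as its entries reach zero. The soft-thresholding step is the standard L1 proximal update, our assumption for how the temporarily enlarged parameter set is tamed:

```python
import numpy as np

def gated_lora_delta(x, A, g, B):
    """x: (batch, d_in), A: (r, d_in), g: (r,), B: (d_out, r).
    Zeroed gate entries remove rank-1 components, lowering the
    adapter's effective rank during training."""
    return ((x @ A.T) * g) @ B.T

def soft_threshold(g, lam):
    # L1 proximal step: pushes small gate entries exactly to zero
    return np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
```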
arXiv Detail & Related papers (2023-11-20T11:56:25Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.