Federated LoRA with Sparse Communication
- URL: http://arxiv.org/abs/2406.05233v1
- Date: Fri, 7 Jun 2024 19:42:05 GMT
- Title: Federated LoRA with Sparse Communication
- Authors: Kevin Kuo, Arian Raje, Kousik Rajesh, Virginia Smith
- Abstract summary: Low-rank adaptation (LoRA) is a natural method for finetuning in communication-constrained machine learning settings.
In this work, we consider techniques for further improving communication-efficiency in federated LoRA.
- Score: 12.965591289179372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank adaptation (LoRA) is a natural method for finetuning in communication-constrained machine learning settings such as cross-device federated learning. Prior work that has studied LoRA in the context of federated learning has focused on improving LoRA's robustness to heterogeneity and privacy. In this work, we instead consider techniques for further improving communication-efficiency in federated LoRA. Unfortunately, we show that centralized ML methods that improve the efficiency of LoRA through unstructured pruning do not transfer well to federated settings. We instead study a simple approach, \textbf{FLASC}, that applies sparsity to LoRA during communication while allowing clients to locally fine-tune the entire LoRA module. Across four common federated learning tasks, we demonstrate that this method matches the performance of dense LoRA with up to $10\times$ less communication. Additionally, despite being designed primarily to target communication, we find that this approach has benefits in terms of heterogeneity and privacy relative to existing approaches tailored to these specific concerns. Overall, our work highlights the importance of considering system-specific constraints when developing communication-efficient finetuning approaches, and serves as a simple and competitive baseline for future work in federated finetuning.
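To make the communication pattern concrete, below is a minimal NumPy sketch of the kind of sparse-communication round the abstract describes: each client fine-tunes its full (dense) LoRA factor locally, but only the top-k entries of its update, by magnitude, are communicated and averaged. The helper names (top_k_sparsify, server_round) and the toy setup are illustrative assumptions, not the paper's reference implementation.

    # Rough sketch of a sparse-communication round in the spirit of FLASC
    # (illustrative only; not the authors' code). Clients keep the full dense
    # LoRA factor locally; only top-k entries of each update are communicated.
    import numpy as np

    def top_k_sparsify(update, density):
        """Keep the largest-magnitude entries of `update`; zero out the rest."""
        k = max(1, int(density * update.size))
        threshold = np.partition(np.abs(update).ravel(), -k)[-k]
        return np.where(np.abs(update) >= threshold, update, 0.0)

    def server_round(global_lora, client_loras, density):
        """Average the clients' sparsified LoRA updates into the global factor."""
        sparse_updates = [top_k_sparsify(w - global_lora, density) for w in client_loras]
        return global_lora + np.mean(sparse_updates, axis=0)

    # Toy round: a rank-4 LoRA factor for a 16-dim layer, 3 clients, 10% density.
    rng = np.random.default_rng(0)
    global_B = np.zeros((16, 4))
    client_Bs = [global_B + 0.01 * rng.standard_normal(global_B.shape)  # stand-in for local fine-tuning
                 for _ in range(3)]
    global_B = server_round(global_B, client_Bs, density=0.1)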
Related papers
- Towards Federated Low-Rank Adaptation with Rank-Heterogeneous Communication [12.515874333424929]
We find that the empirical performance of low-rank adaptation (LoRA) is highly unstable with respect to rank-heterogeneity.
The root cause of this instability is the zero-padding-based aggregation strategy adopted in conventional federated LoRA frameworks.
We propose a new replication-based padding strategy, which allows us to better leverage the information from clients with high-quality datasets.
arXiv Detail & Related papers (2024-06-25T11:49:33Z) - Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning [57.36978335727009]
Low-Rank Adaptation (LoRA) offers an efficient way to fine-tune large language models (LLMs).
In this paper, we propose a framework that adaptively retrieves and composes multiple LoRAs based on input prompts.
arXiv Detail & Related papers (2024-06-24T05:24:41Z) - Mixture of LoRA Experts [87.50120181861362]
This paper introduces the Mixture of LoRA Experts (MoLE) approach, which harnesses hierarchical control and unfettered branch selection.
The MoLE approach achieves superior LoRA fusion performance in comparison to direct arithmetic merging.
arXiv Detail & Related papers (2024-04-21T11:59:53Z) - Personalized Wireless Federated Learning for Large Language Models [75.22457544349668]
Large Language Models (LLMs) have revolutionized natural language processing tasks.
Their deployment in wireless networks still faces challenges, such as a lack of privacy and security protection mechanisms.
We introduce two personalized wireless federated fine-tuning methods with low communication overhead.
arXiv Detail & Related papers (2024-04-20T02:30:21Z) - ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models [8.251547772610301]
We extend the methodology of low-rank adaptation (LoRA) to an innovative approach we call allocating low-rank adaptation (ALoRA).
First, we propose a novel method, AB-LoRA, that can effectively estimate the importance score of each LoRA rank.
Second, guided by AB-LoRA, we gradually prune abundant and negatively impacting LoRA ranks and allocate the pruned LoRA budgets to important Transformer modules needing higher ranks.
arXiv Detail & Related papers (2024-03-24T15:09:55Z) - Improving LoRA in Privacy-preserving Federated Learning [44.47315926976059]
Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods on pre-trained language models.
This paper proposes an efficient and effective version of LoRA, Federated Freeze A LoRA (FFA-LoRA), to alleviate these challenges (a sketch of the freeze-A idea appears after this list).
arXiv Detail & Related papers (2024-03-18T23:20:08Z) - ResLoRA: Identity Residual Mapping in Low-Rank Adaption [96.59370314485074]
We propose ResLoRA, an improved framework for low-rank adaptation (LoRA).
Our method can achieve better results in fewer training steps without any extra trainable parameters or inference cost compared to LoRA.
The experiments on NLG, NLU, and text-to-image tasks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-02-28T04:33:20Z) - Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation
Models [20.707283766914017]
HetLoRA allows heterogeneous ranks across client devices and efficiently aggregates and distributes these heterogeneous LoRA modules.
HetLoRA achieves improved convergence speed and final performance compared to homogeneous LoRA.
arXiv Detail & Related papers (2024-01-12T07:52:07Z) - Sparse Low-rank Adaptation of Pre-trained Language Models [79.74094517030035]
We introduce sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process.
Our approach strengthens the representation power of LoRA by initializing it with a higher rank, while efficiently taming a temporarily increased number of parameters.
Our experimental results demonstrate that SoRA can outperform other baselines even with 70% retained parameters and 70% training time.
arXiv Detail & Related papers (2023-11-20T11:56:25Z) - pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA
Tuning [35.59830784463706]
Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) to collaboratively train on decentralized data.
We propose a novel and efficient model-heterogeneous personalized Federated learning framework based on LoRA tuning (pFedLoRA).
Experiments on two benchmark datasets demonstrate that pFedLoRA outperforms six state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-20T05:24:28Z) - DisPFL: Towards Communication-Efficient Personalized Federated Learning
via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
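For contrast with the sparse-communication sketch above, the following is a minimal, hedged sketch of the freeze-A idea referenced in the FFA-LoRA entry: all clients share one fixed, randomly initialized A factor and train only B, so averaging the clients' B factors is equivalent to averaging their low-rank updates, and only half of the LoRA parameters are communicated. Variable and function names are illustrative assumptions, not taken from that paper.

    # Minimal sketch of the freeze-A idea (illustrative names, not the FFA-LoRA code).
    import numpy as np

    rng = np.random.default_rng(0)
    d_out, d_in, rank, num_clients = 32, 64, 4, 5

    A = rng.standard_normal((rank, d_in)) / np.sqrt(d_in)  # frozen; shared by every client
    global_B = np.zeros((d_out, rank))                      # zero-init so B @ A starts at zero

    def aggregate_B(client_Bs):
        """Server step: only the small trainable B factors are uploaded and averaged."""
        return np.mean(client_Bs, axis=0)

    # Toy round: each client perturbs B locally (stand-in for fine-tuning on its data).
    client_Bs = [global_B + 0.01 * rng.standard_normal(global_B.shape) for _ in range(num_clients)]
    global_B = aggregate_B(client_Bs)
    lora_update = global_B @ A  # with A fixed, mean(B_i) @ A == mean(B_i @ A)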