Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation
Models
- URL: http://arxiv.org/abs/2401.06432v2
- Date: Tue, 20 Feb 2024 21:15:59 GMT
- Title: Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation
Models
- Authors: Yae Jee Cho and Luyang Liu and Zheng Xu and Aldi Fahrezi and Gauri
Joshi
- Abstract summary: HetLoRA allows heterogeneous ranks across client devices and efficiently aggregates and distributes these heterogeneous LoRA modules.
HetLoRA achieves improved convergence speed and final performance compared to homogeneous LoRA.
- Score: 20.707283766914017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation models (FMs) adapt well to specific domains or tasks with
fine-tuning, and federated learning (FL) enables the potential for
privacy-preserving fine-tuning of the FMs with on-device local data. For
federated fine-tuning of FMs, we consider FMs with small to medium parameter
sizes (single-digit billions at most), referred to as on-device FMs (ODFMs),
which can be deployed on devices for inference but can only be fine-tuned with
parameter-efficient methods. In our work, we tackle the data
and system heterogeneity problem of federated fine-tuning of ODFMs by proposing
a novel method using heterogeneous low-rank approximations (LoRAs), namely
HetLoRA. First, we show that the naive approach of using a homogeneous LoRA
rank across devices faces a trade-off between overfitting and slow convergence,
and
thus propose HetLoRA, which allows heterogeneous ranks across client devices
and efficiently aggregates and distributes these heterogeneous LoRA modules. By
applying rank self-pruning locally and sparsity-weighted aggregation at the
server, HetLoRA combines the advantages of high- and low-rank LoRAs, achieving
improved convergence speed and final performance compared to
homogeneous LoRA. Furthermore, HetLoRA offers enhanced computation efficiency
compared to full fine-tuning, making it suitable for federated fine-tuning
across heterogeneous devices.
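
To ground the aggregation described above, here is a minimal sketch of heterogeneous-rank LoRA aggregation in the spirit of the abstract: each client self-prunes its local rank, and the server zero-pads the heterogeneous modules to a common maximum rank before taking a weighted average. The magnitude-based pruning criterion and norm-based weights are illustrative assumptions, not the paper's exact specification.

    # Sketch of HetLoRA-style aggregation: heterogeneous ranks, local rank
    # self-pruning, and sparsity-weighted averaging (details are assumptions).
    import numpy as np

    d_out, d_in, r_max = 64, 64, 8

    def self_prune(B, A, keep_frac=0.75):
        """Drop the lowest-magnitude rank components (assumed criterion)."""
        scores = np.linalg.norm(B, axis=0) * np.linalg.norm(A, axis=1)
        keep = max(1, int(keep_frac * len(scores)))
        idx = np.argsort(scores)[::-1][:keep]
        return B[:, idx], A[idx, :]

    def aggregate(clients):
        """Zero-pad each client's module to r_max, then average with weights
        proportional to the Frobenius norm of its update (assumption)."""
        w = np.array([np.linalg.norm(B @ A) for B, A in clients])
        w = w / w.sum()
        B_glob, A_glob = np.zeros((d_out, r_max)), np.zeros((r_max, d_in))
        for wk, (B, A) in zip(w, clients):
            rk = B.shape[1]
            B_glob[:, :rk] += wk * B
            A_glob[:rk, :] += wk * A
        return B_glob, A_glob

    rng = np.random.default_rng(0)
    clients = []
    for rk in (2, 4, 8):  # heterogeneous ranks across client devices
        B, A = rng.normal(size=(d_out, rk)), rng.normal(size=(rk, d_in))
        clients.append(self_prune(B, A))
    B_glob, A_glob = aggregate(clients)
    # Distribution: client k resumes from the truncated slices
    # B_glob[:, :r_k] and A_glob[:r_k, :] in the next round.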
Related papers
- Federated LoRA with Sparse Communication [12.965591289179372]
Low-rank adaptation (LoRA) is a natural method for fine-tuning in communication-constrained machine learning settings.
In this work, we consider techniques for further improving communication efficiency in federated LoRA.
arXiv Detail & Related papers (2024-06-07T19:42:05Z)
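Why LoRA is attractive in communication-constrained federated settings, as the entry above notes, comes down to the reparameterization W + (alpha/r)·BA: clients train and upload only the small factors A and B. A minimal PyTorch sketch with arbitrary example dimensions:

    # Minimal LoRA linear layer: the pre-trained weight stays frozen and only
    # the low-rank factors A and B are trained (and, in FL, communicated).
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, d_in, d_out, r=4, alpha=8):
            super().__init__()
            self.base = nn.Linear(d_in, d_out, bias=False)
            self.base.weight.requires_grad = False        # frozen backbone
            self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
            self.B = nn.Parameter(torch.zeros(d_out, r))  # zero init: no drift at start
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(512, 512, r=4)
    full = layer.base.weight.numel()              # 262144 params
    lora = layer.A.numel() + layer.B.numel()      # 4096 params per round
    print(f"upload per round: {lora} vs full fine-tuning: {full}")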
- Mixture of LoRA Experts [87.50120181861362]
This paper introduces the Mixture of LoRA Experts (MoLE) approach, which harnesses hierarchical control and unfettered branch selection.
The MoLE approach achieves superior LoRA fusion performance in comparison to direct arithmetic merging.
arXiv Detail & Related papers (2024-04-21T11:59:53Z)
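The entry above contrasts learned fusion with direct arithmetic merging of LoRA weights. A rough sketch of the mixture idea follows, with a plain softmax gate standing in for MoLE's hierarchical control; the gate design and branch structure here are assumptions, not MoLE's exact architecture.

    # Sketch: a learned gate mixes the outputs of several LoRA branches
    # instead of arithmetically merging their weight matrices.
    import torch
    import torch.nn as nn

    class LoRABranch(nn.Module):
        def __init__(self, d_in, d_out, r=4):
            super().__init__()
            self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
            self.B = nn.Parameter(torch.zeros(d_out, r))

        def forward(self, x):
            return x @ self.A.T @ self.B.T

    class LoRAMixture(nn.Module):
        def __init__(self, base, branches, d_in):
            super().__init__()
            self.base = base                        # frozen pre-trained layer
            self.branches = nn.ModuleList(branches)
            self.gate = nn.Linear(d_in, len(branches))

        def forward(self, x):
            g = torch.softmax(self.gate(x), dim=-1)                 # (..., n)
            outs = torch.stack([b(x) for b in self.branches], -1)   # (..., d_out, n)
            return self.base(x) + (outs * g.unsqueeze(-2)).sum(-1)

    base = nn.Linear(512, 512, bias=False)
    base.weight.requires_grad = False
    mix = LoRAMixture(base, [LoRABranch(512, 512) for _ in range(3)], 512)
    y = mix(torch.randn(2, 512))                    # (2, 512)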
- Improving LoRA in Privacy-preserving Federated Learning [44.47315926976059]
Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods for pre-trained language models.
This paper proposes an efficient and effective variant of LoRA, Federated Freeze A LoRA (FFA-LoRA), to address the challenges of applying LoRA in privacy-preserving federated learning.
arXiv Detail & Related papers (2024-03-18T23:20:08Z)
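Reading "Federated Freeze A LoRA" at face value, the sketch below illustrates why freezing a shared A matrix and training only B helps federated averaging: with A fixed across clients, averaging the uploaded B matrices is exactly equivalent to averaging the full updates B_k·A, which averaging (A_k, B_k) pairs separately does not guarantee. This illustrates the freezing idea only, not FFA-LoRA's complete method.

    # With a shared frozen A, FedAvg over B is exact: mean_k(B_k) @ A equals
    # mean_k(B_k @ A). Local training is faked with random B_k for brevity.
    import torch

    d, r, n_clients = 512, 4, 3
    torch.manual_seed(0)
    A = torch.randn(r, d) * 0.01                  # frozen, identical on all clients
    client_Bs = [torch.randn(d, r) for _ in range(n_clients)]

    B_avg = torch.stack(client_Bs).mean(0)
    delta_from_avg_B = B_avg @ A
    delta_avg = torch.stack([B @ A for B in client_Bs]).mean(0)
    print(torch.allclose(delta_from_avg_B, delta_avg, atol=1e-6))  # True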
- PRoLoRA: Partial Rotation Empowers More Parameter-Efficient LoRA [45.38491644250814]
Partially Rotation-enhanced Low-Rank Adaptation (PRoLoRA) is an intra-layer sharing mechanism.
PRoLoRA retains the advantages of vanilla LoRA while effectively circumventing the drawbacks of peer parameter-sharing methods.
Empirical experiments demonstrate the remarkably higher parameter efficiency of PRoLoRA.
arXiv Detail & Related papers (2024-02-24T13:39:05Z)
- Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources [31.041608465716575]
Federated Learning (FL) has recently been applied to the parameter-efficient fine-tuning of Large Language Models (LLMs).
This study introduces FlexLoRA, a simple yet effective aggregation scheme for LLM fine-tuning.
arXiv Detail & Related papers (2024-02-18T08:32:59Z)
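One way to read an aggregation scheme that accommodates heterogeneous client ranks is sketched below: the server reconstructs each client's full update B_k·A_k, averages the full updates, and redistributes the average to each client's own rank via truncated SVD. The uniform weighting and the SVD step are this sketch's assumptions about FlexLoRA, not a verbatim description.

    # Sketch of full-update aggregation with SVD-based redistribution to
    # heterogeneous client ranks (assumed details).
    import numpy as np

    d, ranks = 64, [2, 4, 8]
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=(d, r)) @ rng.normal(size=(r, d)) for r in ranks]
    delta = np.mean(updates, axis=0)              # aggregated full update

    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    for r in ranks:                               # redistribute at each client's rank
        B_k = U[:, :r] * S[:r]                    # (d, r)
        A_k = Vt[:r, :]                           # (r, d)
        print(r, B_k.shape, A_k.shape)            # client resumes from (B_k, A_k)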
- FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients [50.13097183691517]
In real-world federated scenarios, there often exists a multitude of heterogeneous clients with varying computation and communication resources.
We propose a novel federated tuning algorithm, FedRA.
In each communication round, FedRA randomly generates an allocation matrix.
It reorganizes a small number of layers from the original model based on the allocation matrix and fine-tunes using adapters.
arXiv Detail & Related papers (2023-11-19T04:43:16Z)
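The allocation step described in the FedRA entry above can be pictured as sampling, each round, a binary client-by-layer matrix constrained by per-client resource budgets; adapter updates are then aggregated per layer over only the clients that received that layer. The budget encoding below is an assumption for illustration.

    # Sketch of a random allocation matrix: row k marks which layers client k
    # fine-tunes this round, capped by its (assumed) resource budget.
    import numpy as np

    n_clients, n_layers = 4, 12
    budgets = [3, 6, 9, 12]                       # layers each client can afford
    rng = np.random.default_rng(0)

    alloc = np.zeros((n_clients, n_layers), dtype=int)
    for k, b in enumerate(budgets):
        alloc[k, rng.choice(n_layers, size=b, replace=False)] = 1

    holders = alloc.sum(axis=0)                   # clients holding each layer;
    print(alloc)                                  # per-layer adapter updates are
    print(holders)                                # averaged over these holders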
- pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning [35.59830784463706]
Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) to collaboratively train on decentralized data.
We propose a novel and efficient model-heterogeneous personalized federated learning framework based on LoRA tuning (pFedLoRA).
Experiments on two benchmark datasets demonstrate that pFedLoRA outperforms six state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-20T05:24:28Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method, FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server less frequently and at lower communication bandwidth cost.
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
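A minimal sketch of the anchor-matching objective suggested by the FedFM entry above: the local loss augments cross-entropy with a term pulling penultimate-layer features toward server-shared, category-wise anchors. The squared-error distance and the weight lam are assumptions.

    # Local FedFM-style objective: task loss plus feature-to-anchor matching.
    import torch
    import torch.nn.functional as F

    def fedfm_loss(logits, feats, labels, anchors, lam=0.1):
        """anchors: (n_classes, d) tensor broadcast from the server."""
        ce = F.cross_entropy(logits, labels)
        match = F.mse_loss(feats, anchors[labels])  # pull feats to class anchors
        return ce + lam * match

    logits, feats = torch.randn(8, 10), torch.randn(8, 32)
    labels, anchors = torch.randint(0, 10, (8,)), torch.randn(10, 32)
    print(fedfm_loss(logits, feats, labels, anchors))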
- Specificity-Preserving Federated Learning for MR Image Reconstruction [94.58912814426122]
Federated learning can be used to improve data privacy and efficiency in magnetic resonance (MR) image reconstruction.
Recent FL techniques tend to address cross-site data heterogeneity by enhancing the generalization of the global model.
We propose a specificity-preserving FL algorithm for MR image reconstruction (FedMRI).
arXiv Detail & Related papers (2021-12-09T22:13:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.