pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA
Tuning
- URL: http://arxiv.org/abs/2310.13283v2
- Date: Sun, 11 Feb 2024 11:38:38 GMT
- Title: pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA
Tuning
- Authors: Liping Yi, Han Yu, Gang Wang, Xiaoguang Liu, Xiaoxiao Li
- Abstract summary: Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) to collaboratively train models on decentralized data.
We propose a novel and efficient model-heterogeneous personalized federated learning framework based on LoRA tuning (pFedLoRA).
Experiments on two benchmark datasets demonstrate that pFedLoRA outperforms six state-of-the-art baselines.
- Score: 35.59830784463706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging machine learning paradigm in which a
central server coordinates multiple participants (clients) to collaboratively
train models on decentralized data. In practice, FL often faces statistical,
system, and model heterogeneities, which have inspired the field of
Model-Heterogeneous Personalized Federated Learning (MHPFL). With the
increasing interest in adopting large language models (LLMs) in FL, existing
MHPFL methods cannot achieve acceptable computation and communication costs
while maintaining satisfactory model performance. To bridge this gap, we
propose a novel and efficient model-heterogeneous personalized federated
learning framework based on LoRA tuning (pFedLoRA). Inspired by the popular
LoRA method for fine-tuning pre-trained LLMs with low-rank models (a.k.a.
adapters), we design a homogeneous small adapter to facilitate federated
clients' heterogeneous local model training, with a proposed iterative training
scheme for global-local knowledge exchange. The homogeneous small local
adapters are aggregated on the FL server to generate a global adapter. We
theoretically prove the convergence of pFedLoRA. Extensive experiments on two
benchmark datasets demonstrate that pFedLoRA outperforms six state-of-the-art
baselines, beating the best method by 1.35% in test accuracy, with an 11.81x
reduction in computation overhead and a 7.41x saving in communication cost.
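
To make the adapter-based knowledge exchange concrete, below is a minimal, hedged sketch of the idea described in the abstract. It is not the authors' implementation; all class and function names (LoRAAdapter, ClientModel, aggregate_adapters) and the placement of the adapter on top of the backbone features are illustrative assumptions. The sketch shows the two structural points the abstract states: every client attaches a small adapter of identical shape to its own heterogeneous backbone, and the server aggregates only those homogeneous adapter parameters into a global adapter.

```python
# Hedged sketch (not the paper's code): homogeneous LoRA-style adapters on
# heterogeneous client backbones, with server-side aggregation of adapters only.
import copy
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """Low-rank adapter: delta(x) = x @ A^T @ B^T with a small rank r."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up-projection, zero-init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.A.T @ self.B.T


class ClientModel(nn.Module):
    """A client's heterogeneous backbone plus the shared-shape adapter and head."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, rank: int = 8):
        super().__init__()
        self.backbone = backbone                                  # differs across clients
        self.head = nn.Linear(feat_dim, num_classes)              # personalized local head
        self.adapter = LoRAAdapter(feat_dim, num_classes, rank)   # identical shape on all clients

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # Local prediction augmented by the globally shared low-rank component.
        return self.head(feats) + self.adapter(feats)


def aggregate_adapters(client_adapters, weights):
    """Server step: weighted average of the homogeneous adapter parameters."""
    global_adapter = copy.deepcopy(client_adapters[0])
    with torch.no_grad():
        for name, param in global_adapter.named_parameters():
            avg = sum(w * dict(a.named_parameters())[name]
                      for a, w in zip(client_adapters, weights))
            param.copy_(avg)
    return global_adapter
```

One plausible reading of the "iterative training for global-local knowledge exchange" is that each client alternates between updating its backbone with the adapter frozen and updating the adapter with the backbone frozen, then uploads only the adapter; since the adapter is far smaller than the full model, this is consistent with the reported computation and communication savings.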