Federated Low-Rank Adaptation for Foundation Models: A Survey
- URL: http://arxiv.org/abs/2505.13502v1
- Date: Fri, 16 May 2025 07:19:51 GMT
- Title: Federated Low-Rank Adaptation for Foundation Models: A Survey
- Authors: Yiyuan Yang, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang, Chengqi Zhang
- Abstract summary: Low-Rank Adaptation (LoRA) offers a resource-efficient alternative for fine-tuning foundation models by dramatically reducing the number of trainable parameters. This survey examines how LoRA has been integrated into federated fine-tuning for foundation models.
- Score: 43.891813267708265
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Effectively leveraging private datasets remains a significant challenge in developing foundation models. Federated Learning (FL) has recently emerged as a collaborative framework that enables multiple users to fine-tune these models while mitigating data privacy risks. Meanwhile, Low-Rank Adaptation (LoRA) offers a resource-efficient alternative for fine-tuning foundation models by dramatically reducing the number of trainable parameters. This survey examines how LoRA has been integrated into federated fine-tuning for foundation models, an area we term FedLoRA, by focusing on three key challenges: distributed learning, heterogeneity, and efficiency. We further categorize existing work based on the specific methods used to address each challenge. Finally, we discuss open research questions and highlight promising directions for future investigation, outlining the next steps for advancing FedLoRA.
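To make the parameter-count argument concrete, below is a minimal, illustrative sketch (not taken from the survey) of how a LoRA update replaces a full weight update with two small low-rank factors, and how a FedLoRA-style server could average only those factors across clients. The dimensions, initialization, and the plain FedAvg-style aggregation over adapters are assumptions for illustration; the survey catalogs far more sophisticated aggregation and heterogeneity-aware variants.

```python
# Illustrative sketch only: LoRA parameter savings and a naive FedLoRA-style
# aggregation of low-rank adapters. All dimensions and names are assumptions.
import numpy as np

d_out, d_in, rank = 768, 768, 8            # full weight is d_out x d_in

# LoRA: freeze the pretrained weight W and learn a low-rank update B @ A.
W = np.random.randn(d_out, d_in)           # frozen pretrained weight
A = np.random.randn(rank, d_in) * 0.01     # trainable, rank x d_in
B = np.zeros((d_out, rank))                # trainable, d_out x rank

def adapted_forward(x):
    """Forward pass with the low-rank correction added to the frozen weight."""
    return x @ (W + B @ A).T

full_params = W.size                       # 589,824 trainable if fine-tuned fully
lora_params = A.size + B.size              # 12,288 (~2% of the full count)

# FedLoRA-style aggregation sketch: clients transmit only (A, B); the server
# averages the adapters (simple FedAvg, ignoring client heterogeneity).
def aggregate(client_adapters):
    A_avg = np.mean([a for a, _ in client_adapters], axis=0)
    B_avg = np.mean([b for _, b in client_adapters], axis=0)
    return A_avg, B_avg
```

Because only the adapters travel between clients and server, communication per round scales with `rank * (d_in + d_out)` rather than `d_in * d_out`, which is the efficiency motivation the abstract refers to.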
Related papers
- Beyond Policy Optimization: A Data Curation Flywheel for Sparse-Reward Long-Horizon Planning [15.103861901247125]
We propose a three-stage framework to develop robust reasoning models for sparse-reward environments. Our framework bootstraps efficient reasoning using the proposed planning quaternions with long-short chain-of-thought fusion. Experiments on ALFWorld, ScienceWorld, and WebShop demonstrate that our approach achieves state-of-the-art performance with significant token efficiency.
arXiv Detail & Related papers (2025-08-05T02:56:58Z) - Towards One-shot Federated Learning: Advances, Challenges, and Future Directions [7.4943359806654435]
One-shot FL enables collaborative training in a single round, eliminating the need for iterative communication. One-shot FL supports resource-limited devices by enabling single-round model aggregation while maintaining data locality.
arXiv Detail & Related papers (2025-05-05T07:46:21Z) - A Survey on Federated Fine-tuning of Large Language Models [17.79395946441051]
Federated Learning (FL) offers a promising approach that enables collaborative model adaptation while ensuring data privacy. We first trace the historical evolution of both Large Language Models (LLMs) and FL, while summarizing relevant prior surveys. Following this, we conduct an extensive study of existing parameter-efficient fine-tuning (PEFT) methods and explore their applicability in FL. Finally, we identify critical open challenges and outline promising research directions to drive future advancements in FedLLM.
arXiv Detail & Related papers (2025-03-15T06:52:10Z) - Low-Rank Adaptation for Foundation Models: A Comprehensive Review [42.23155921954156]
Low-Rank Adaptation (LoRA) has emerged as a highly promising approach for mitigating these challenges. This survey provides the first comprehensive review of LoRA techniques beyond large language models to general foundation models.
arXiv Detail & Related papers (2024-12-31T09:38:55Z) - FedMAC: Tackling Partial-Modality Missing in Federated Learning with Cross-Modal Aggregation and Contrastive Regularization [11.954904313477176]
Federated Learning (FL) is a method for training machine learning models using distributed data sources.
This study proposes a novel framework named FedMAC, designed to address the problem of partially missing modalities in multi-modal FL.
arXiv Detail & Related papers (2024-10-04T01:24:02Z) - Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z) - Fine-Tuning Language Models with Reward Learning on Policy [68.70065254564642]
Reinforcement learning from human feedback (RLHF) has emerged as an effective approach to aligning large language models (LLMs) to human preferences.
Despite its popularity, (fixed) reward models may become inaccurate off-distribution.
We propose reward learning on policy (RLP), an unsupervised framework that refines a reward model using policy samples to keep it on-distribution.
arXiv Detail & Related papers (2024-03-28T10:02:10Z) - Dual-Personalizing Adapter for Federated Foundation Models [35.863585349109385]
We propose a Federated Dual-Personalizing Adapter architecture to handle test-time distribution shifts and client-specific personalization simultaneously. The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
arXiv Detail & Related papers (2024-03-28T08:19:33Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - A Comprehensive Survey on Source-free Domain Adaptation [69.17622123344327]
The research of Source-Free Domain Adaptation (SFDA) has drawn growing attention in recent years.
We provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme.
We compare the results of more than 30 representative SFDA methods on three popular classification benchmarks.
arXiv Detail & Related papers (2023-02-23T06:32:09Z) - Exploring Neural Models for Query-Focused Summarization [74.41256438059256]
We conduct a systematic exploration of neural approaches to query-focused summarization (QFS).
We present two model extensions that achieve state-of-the-art performance on the QMSum dataset by a margin of up to 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L.
arXiv Detail & Related papers (2021-12-14T18:33:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.