Personalized Federated Fine-tuning for Heterogeneous Data: An Automatic Rank Learning Approach via Two-Level LoRA
- URL: http://arxiv.org/abs/2503.03920v1
- Date: Wed, 05 Mar 2025 21:41:03 GMT
- Title: Personalized Federated Fine-tuning for Heterogeneous Data: An Automatic Rank Learning Approach via Two-Level LoRA
- Authors: Jie Hao, Yuman Wu, Ali Payani, Myungjin Lee, Mingrui Liu
- Abstract summary: PF2LoRA is a new personalized federated fine-tuning algorithm built on a novel automatic rank learning approach via two-level LoRA. Our experiments on natural language understanding and generation tasks demonstrate that PF2LoRA significantly outperforms existing federated fine-tuning methods.
- Score: 14.786030311860145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the task of personalized federated fine-tuning with heterogeneous data in the context of language models, where clients collaboratively fine-tune a language model (e.g., BERT, GPT) without sharing their local data, while simultaneously achieving personalization. While recent efforts have applied parameter-efficient fine-tuning techniques like low-rank adaptation (LoRA) in federated settings, they typically use single or multiple independent low-rank adapters with predefined maximal and minimal ranks, which may not be optimal for diverse data sources over clients. To address this issue, we propose PF2LoRA, a new personalized federated fine-tuning algorithm built on a novel automatic rank learning approach via two-level LoRA. Given a pretrained language model whose weights are frozen, our algorithm aims to learn two levels of adaptation simultaneously: the first level learns a common adapter for all clients, while the second level fosters individual client personalization. A key advantage of PF2LoRA is its ability to adaptively determine a suitable rank based on an individual client's data, rather than relying on a predefined rank that is agnostic to data heterogeneity. We present a synthetic example that highlights how PF2LoRA automatically learns the ground-truth rank for each client, tailoring the adaptation to match the properties of their individual data. Notably, this approach introduces minimal additional memory overhead, as the second-level adaptation comprises a small number of parameters compared to the first level. Our experiments on natural language understanding and generation tasks demonstrate that PF2LoRA significantly outperforms existing federated fine-tuning methods.
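As a reading aid, here is a minimal PyTorch sketch of how such a two-level adapter could be composed around a frozen linear layer. The class name, the additive composition (B + D)(A + C), and the initialization are illustrative assumptions, not the authors' reference implementation; the bilevel training loop and the rank-learning mechanics are omitted.

```python
import torch
import torch.nn as nn

class TwoLevelLoRALinear(nn.Module):
    """Frozen linear layer with a two-level low-rank adapter (a sketch).

    Level 1 (A, B): common adapter, shared across all clients.
    Level 2 (C, D): lightweight client-specific corrections.
    The names and the additive composition are illustrative assumptions.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen

        d_out, d_in = base.weight.shape
        # Level 1: common low-rank factors (trained collaboratively).
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Level 2: small client-specific corrections to each factor.
        self.C = nn.Parameter(torch.zeros(rank, d_in))
        self.D = nn.Parameter(torch.zeros(d_out, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective update: (B + D)(A + C). If the client-specific factors
        # cancel or suppress some rank components, the client's effective
        # rank can fall below the predefined maximum.
        delta = (self.B + self.D) @ (self.A + self.C)
        return self.base(x) + self.scaling * (x @ delta.T)
```

In a federated round, the first-level factors (A, B) would be aggregated across clients while the second-level factors (C, D) stay local; shrinking components of the client-specific factors toward zero is one way an effective per-client rank could emerge.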
Related papers
- FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors [50.131271229165165]
Federated Learning (FL) has emerged as a promising framework for distributed machine learning.
Data heterogeneity resulting from differences across user behaviors, preferences, and device characteristics poses a significant challenge for federated learning.
We propose Adaptive Weight Aggregation (FedAWA), a novel method that adaptively adjusts aggregation weights based on client vectors during the learning process (one plausible weighting rule is sketched after this entry).
arXiv Detail & Related papers (2025-03-20T04:49:40Z)
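A hedged sketch of what adaptive aggregation weights derived from client vectors might look like; the softmax-over-cosine-similarity rule below is an assumption for illustration, not FedAWA's actual criterion.

```python
import torch

def adaptive_aggregate(client_updates: list[torch.Tensor]) -> torch.Tensor:
    """Aggregate flattened client updates with data-driven weights.

    Weighting rule (an assumption for illustration): clients whose update
    direction agrees with the mean update get larger weights, via a
    softmax over cosine similarities.
    """
    stacked = torch.stack(client_updates)          # (K, d)
    mean_dir = stacked.mean(dim=0)
    sims = torch.nn.functional.cosine_similarity(stacked, mean_dir.unsqueeze(0))
    weights = torch.softmax(sims, dim=0)           # (K,)
    return (weights.unsqueeze(1) * stacked).sum(dim=0)
```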
- Dual-Criterion Model Aggregation in Federated Learning: Balancing Data Quantity and Quality [0.0]
Federated learning (FL) has become one of the key methods for privacy-preserving collaborative learning.
An aggregation algorithm is recognized as one of the most crucial components for ensuring the efficacy and security of the system.
This study proposes a novel dual-criterion weighted aggregation algorithm involving the quantity and quality of data from the client nodes (one way to blend the two criteria is sketched after this entry).
arXiv Detail & Related papers (2024-11-12T14:09:16Z)
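One plausible way to blend the two criteria into aggregation weights; the quality proxy (exponential of negative validation loss) and the linear blend are assumptions, not the paper's exact formulation.

```python
import math

def dual_criterion_weights(num_samples: list[int],
                           val_losses: list[float],
                           beta: float = 0.5) -> list[float]:
    """Blend data quantity and a quality proxy into aggregation weights.

    quality_k is approximated as exp(-val_loss_k) (lower loss -> higher
    quality); beta balances the two criteria. Both choices are
    illustrative assumptions.
    """
    total_n = sum(num_samples)
    quantity = [n / total_n for n in num_samples]
    quality_raw = [math.exp(-loss) for loss in val_losses]
    total_q = sum(quality_raw)
    quality = [q / total_q for q in quality_raw]
    blended = [beta * a + (1 - beta) * b for a, b in zip(quantity, quality)]
    s = sum(blended)
    return [w / s for w in blended]  # renormalize to sum to 1
```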
- Collaborative and Efficient Personalization with Mixtures of Adaptors [5.195669033269619]
We propose a parameter-efficient framework to tackle multi-task learning problems.
We call our framework Federated Low-Rank Adaptive Learning (FLoRAL).
We show promising experimental results on synthetic datasets and real-world federated multi-task problems (a sketch of the adapter-mixture idea follows this entry).
arXiv Detail & Related papers (2024-10-04T15:11:15Z)
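A minimal sketch of a mixture-of-adapters layer under the stated idea: a shared bank of low-rank adapters plus a tiny client-local mixture vector. The softmax mixture and the shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MixtureOfLoRA(nn.Module):
    """Linear layer with a mixture of shared low-rank adapters (a sketch).

    Each client keeps only a small mixture-weight vector `logits`; the
    adapter bank (A, B) is shared and federated. The softmax mixture is
    an illustrative reading of the mixtures-of-adaptors idea.
    """

    def __init__(self, base: nn.Linear, n_adapters: int = 4, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(n_adapters, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_adapters, d_out, rank))
        self.logits = nn.Parameter(torch.zeros(n_adapters))  # client-local

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.logits, dim=0)                 # (n,)
        # Weighted sum of per-adapter updates B_n @ A_n.
        delta = torch.einsum("n,nor,nri->oi", w, self.B, self.A)
        return self.base(x) + x @ delta.T
```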
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models (one plausible instantiation is sketched after this entry).
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
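A hedged sketch of feature uploading via per-class prototypes; the prototype message format and the simple averaging fusion are assumptions, not necessarily pFedPM's exact design.

```python
import torch

def client_feature_message(features: torch.Tensor,
                           labels: torch.Tensor,
                           num_classes: int) -> torch.Tensor:
    """Summarize local data as per-class mean features (prototypes).

    Uploading such prototypes instead of gradients is one plausible
    instantiation of "feature uploading".
    """
    d = features.shape[1]
    protos = torch.zeros(num_classes, d)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def server_fuse(prototype_list: list[torch.Tensor]) -> torch.Tensor:
    """Fuse client prototypes by simple averaging (illustrative)."""
    return torch.stack(prototype_list).mean(dim=0)
```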
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In some real-world applications, data samples are usually distributed on local devices.
In this paper, we focus on a special kind of non-IID scenario where clients own incomplete classes.
Our proposed algorithm, named MAP, simultaneously achieves the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning [9.22574528776347]
FedSelect is a novel PFL algorithm inspired by the iterative subnetwork discovery procedure used for the Lottery Ticket Hypothesis.
We show that FedSelect outperforms recent state-of-the-art PFL algorithms under challenging client data heterogeneity settings (a lottery-ticket-style selection heuristic is sketched after this entry).
arXiv Detail & Related papers (2024-04-03T05:36:21Z)
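A lottery-ticket-flavored sketch of customized parameter selection: parameters that drift most during local tuning are marked for personalization. The drift-magnitude criterion and the keep fraction are illustrative assumptions, not FedSelect's exact procedure.

```python
import torch

def select_personal_mask(global_params: dict[str, torch.Tensor],
                         local_params: dict[str, torch.Tensor],
                         keep_frac: float = 0.1) -> dict[str, torch.Tensor]:
    """Mark the parameters that drifted most during local tuning.

    Heuristic (an assumption): the top `keep_frac` of entries by
    |local - global| become client-personalized; the rest keep being
    aggregated globally.
    """
    masks = {}
    for name, g in global_params.items():
        drift = (local_params[name] - g).abs()
        k = max(1, int(keep_frac * drift.numel()))
        threshold = drift.flatten().topk(k).values.min()
        masks[name] = drift >= threshold  # True -> personalized entry
    return masks
```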
- Communication-Efficient Personalized Federated Learning for Speech-to-Text Tasks [64.02867484165476]
To protect privacy and meet legal regulations, federated learning (FL) has gained significant attention for training speech-to-text (S2T) systems.
The commonly used FL approach (i.e., FedAvg) in S2T tasks typically suffers from extensive communication overhead.
We propose a personalized federated S2T framework that introduces FedLoRA, a lightweight LoRA module for client-side tuning and interaction with the server, and FedMem, a global model equipped with a $k$-nearest-neighbor classifier (a hedged sketch of such kNN interpolation follows this entry).
arXiv Detail & Related papers (2024-01-18T15:39:38Z)
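A sketch of how a kNN-equipped global model might personalize predictions, following the generic kNN-LM interpolation recipe; the distance weighting and the interpolation coefficient are assumptions about FedMem's design.

```python
import torch

def knn_interpolated_probs(query: torch.Tensor,
                           memory_keys: torch.Tensor,
                           memory_probs: torch.Tensor,
                           model_probs: torch.Tensor,
                           k: int = 8,
                           lam: float = 0.3) -> torch.Tensor:
    """Blend the global model's prediction with a local kNN memory.

    memory_keys: (N, d) hidden states from the client's own data;
    memory_probs: (N, V) targets stored with them. The interpolation
    form follows the generic kNN-LM recipe (an assumption here).
    """
    dists = torch.cdist(query.unsqueeze(0), memory_keys).squeeze(0)  # (N,)
    knn = dists.topk(k, largest=False)
    weights = torch.softmax(-knn.values, dim=0)   # closer -> heavier
    knn_probs = (weights.unsqueeze(1) * memory_probs[knn.indices]).sum(dim=0)
    return lam * knn_probs + (1 - lam) * model_probs
```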
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Collaborative Chinese Text Recognition with Personalized Federated Learning [61.34060587461462]
In Chinese text recognition, it is often necessary for one organization to collect a large amount of data from similar organizations.
Due to the natural presence of private information in text data, such as addresses and phone numbers, different organizations are unwilling to share private data.
We introduce personalized federated learning (pFL) into the Chinese text recognition task and propose the pFedCR algorithm.
arXiv Detail & Related papers (2023-05-09T16:51:00Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process (a sketch of coefficient learning in the client-model subspace follows this entry).
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
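A hedged sketch of server-side aggregation optimized in the subspace spanned by client models; the softmax-simplex parameterization and the server-side proxy objective `proxy_loss_fn` are assumptions for illustration.

```python
import torch

def subspace_aggregate(client_models: list[torch.Tensor],
                       proxy_loss_fn,
                       steps: int = 50,
                       lr: float = 0.1) -> torch.Tensor:
    """Optimize aggregation coefficients in the span of client models.

    The aggregated model is constrained to sum_k c_k * theta_k, and the
    coefficients are tuned against a small server-side proxy objective
    (`proxy_loss_fn`, assumed differentiable). Details are illustrative.
    """
    thetas = torch.stack(client_models)          # (K, d)
    logits = torch.zeros(len(client_models), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        coeffs = torch.softmax(logits, dim=0)    # keep weights on the simplex
        model = (coeffs.unsqueeze(1) * thetas).sum(dim=0)
        loss = proxy_loss_fn(model)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        coeffs = torch.softmax(logits, dim=0)
        return (coeffs.unsqueeze(1) * thetas).sum(dim=0)
```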
- Personalized Federated Learning with Exact Stochastic Gradient Descent [15.666401346622575]
We present a new approach to personalized federated learning that achieves exact stochastic gradient descent (SGD) updates.
We propose a novel SGD-type scheme where, at each optimization round, randomly selected clients perform updates over their client-specific parameters towards optimizing the loss function.
This allows an exact minimization: updates of the personalized parameters are performed by the clients, and updates of the common ones by the server (a sketch of this split follows this entry).
arXiv Detail & Related papers (2022-02-20T16:11:20Z)
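A minimal sketch of the personalized/common parameter split; the hypothetical `grad_fn` callback and the averaging of common-block gradients are illustrative assumptions, not the paper's exact protocol.

```python
import torch

def local_round(common: torch.Tensor,
                personal: torch.Tensor,
                grad_fn,
                lr: float = 0.01,
                steps: int = 5):
    """One client round with split parameters (illustrative).

    `grad_fn(common, personal)` is assumed to return the gradients of the
    client's loss w.r.t. both blocks. The client takes SGD steps on its
    personal block locally and reports a common-block gradient for
    server-side aggregation, reflecting the split between client-updated
    personalized parameters and shared ones.
    """
    common_grad_acc = torch.zeros_like(common)
    for _ in range(steps):
        g_common, g_personal = grad_fn(common, personal)
        personal = personal - lr * g_personal  # updated locally, kept private
        common_grad_acc += g_common
    return personal, common_grad_acc / steps
```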