FedHFT: Efficient Federated Finetuning with Heterogeneous Edge Clients
- URL: http://arxiv.org/abs/2510.14054v1
- Date: Wed, 15 Oct 2025 19:47:51 GMT
- Title: FedHFT: Efficient Federated Finetuning with Heterogeneous Edge Clients
- Authors: Fatih Ilhan, Selim Furkan Tekin, Tiansheng Huang, Gaowen Liu, Ramana Kompella, Greg Eisenhauer, Yingyan Celine Lin, Calton Pu, Ling Liu
- Abstract summary: Fine-tuning pre-trained large language models (LLMs) has become a common practice for personalized natural language understanding (NLU) applications. There are two main challenges: (i) limited and/or heterogeneous data for fine-tuning due to proprietary data confidentiality or privacy requirements, and (ii) varying resources available across participating clients such as edge devices. This paper presents FedHFT - an efficient and personalized federated fine-tuning framework to address both challenges.
- Score: 37.60974510385015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuning pre-trained large language models (LLMs) has become a common practice for personalized natural language understanding (NLU) applications on downstream tasks and domain-specific datasets. However, there are two main challenges: (i) limited and/or heterogeneous data for fine-tuning due to proprietary data confidentiality or privacy requirements, and (ii) varying computation resources available across participating clients such as edge devices. This paper presents FedHFT - an efficient and personalized federated fine-tuning framework to address both challenges. First, we introduce a mixture of masked adapters to handle resource heterogeneity across participating clients, enabling high-performance collaborative fine-tuning of pre-trained language model(s) across multiple clients in a distributed setting, while keeping proprietary data local. Second, we introduce a bi-level optimization approach to handle non-iid data distribution based on masked personalization and client clustering. Extensive experiments demonstrate significant performance and efficiency improvements across various natural language understanding tasks under data and resource heterogeneity, compared to representative heterogeneous federated learning methods.
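The masked-adapter idea from the abstract can be illustrated with a minimal sketch: each client trains only the adapter weights selected by a binary mask sized to its compute budget, and the server averages client updates only over the positions each client actually trained. All names, shapes, and keep ratios below are hypothetical; this is a generic illustration of masked sub-adapter aggregation, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_capacity_mask(adapter_shape, keep_ratio, rng):
    """Binary mask keeping a fraction of adapter weights (hypothetical helper)."""
    mask = np.zeros(int(np.prod(adapter_shape)))
    k = int(keep_ratio * mask.size)
    mask[rng.choice(mask.size, size=k, replace=False)] = 1.0
    return mask.reshape(adapter_shape)

# Clients with different compute budgets train differently sized sub-adapters.
adapter = rng.standard_normal((8, 16))          # shared adapter weights
budgets = {"phone": 0.25, "laptop": 0.5, "server": 1.0}
masks = {c: make_capacity_mask(adapter.shape, r, rng) for c, r in budgets.items()}

# Each client's update only touches its masked subset of the adapter.
updates = {c: m * rng.standard_normal(adapter.shape) for c, m in masks.items()}

# Server aggregation: average updates position-wise over the clients
# that actually trained each weight (avoid dividing by zero coverage).
coverage = sum(masks.values())
agg = sum(updates.values()) / np.maximum(coverage, 1.0)
```

The position-wise division by `coverage` keeps sparsely trained weights from being diluted by clients that never touched them.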
Related papers
- FedPPA: Progressive Parameter Alignment for Personalized Federated Learning [0.9931624906346306]
Federated Learning (FL) is designed as a decentralized, privacy-preserving machine learning paradigm that enables multiple clients to collaboratively train a model without sharing their data. In real-world scenarios, clients often have heterogeneous computational resources and hold non-independent and identically distributed (non-IID) data, which poses significant challenges during training. We propose Progressive Parameter Alignment (FedPPA), which progressively aligns the weights of common layers across clients with the global model's weights. Experiments on three image classification datasets, including MNIST, FMNIST, and CIFAR-10, demonstrate that FedPPA consistently outperforms existing FL algorithms.
arXiv Detail & Related papers (2025-10-16T14:03:05Z) - Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients [59.52341877720199]
We propose FedMosaic, a method that enables knowledge sharing across heterogeneous architectures without huge computational cost. To mimic real-world task diversity, we propose a multi-modal PFL benchmark spanning 40 distinct tasks with distribution shifts over time. The empirical study shows that FedMosaic outperforms the state-of-the-art PFL methods.
arXiv Detail & Related papers (2025-05-20T09:17:07Z) - Client Selection in Federated Learning with Data Heterogeneity and Network Latencies [19.161254709653914]
Federated learning (FL) is a distributed machine learning paradigm where multiple clients conduct local training based on their private data, then the updated models are sent to a central server for global aggregation. In this paper, we propose two novel, theoretically optimal client selection schemes that handle both of these heterogeneities.
arXiv Detail & Related papers (2025-04-02T17:31:15Z) - Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - FLASH: Federated Learning Across Simultaneous Heterogeneities [55.0981921695672]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm. It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - Privacy Preserving Bayesian Federated Learning in Heterogeneous Settings [20.33482170846688]
This paper presents a unified federated learning framework based on customized local Bayesian models that learn well even in the absence of large local datasets.
We use priors in the functional (output) space of the networks to facilitate collaboration across heterogeneous clients.
Experiments on standard FL datasets demonstrate that our approach outperforms strong baselines in both homogeneous and heterogeneous settings.
arXiv Detail & Related papers (2023-06-13T17:55:30Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - FedADMM: A Robust Federated Deep Learning Framework with Adaptivity to System Heterogeneity [4.2059108111562935]
Federated Learning (FL) is an emerging framework for distributed processing of large data volumes by edge devices.
In this paper, we introduce FedADMM, a new FL protocol based on the alternating direction method of multipliers (ADMM).
We show that FedADMM consistently outperforms all baseline methods in terms of communication efficiency.
arXiv Detail & Related papers (2022-04-07T15:58:33Z) - Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving user privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
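Several of the entries above (notably FedADMM) build on ADMM-style consensus updates. As a minimal, generic illustration, assuming a scalar toy problem rather than any paper's actual protocol: each client minimizes a local quadratic around its own optimum, and alternating primal, consensus, and dual updates drive the server's consensus variable to the minimizer of the summed objectives.

```python
import numpy as np

# Consensus-ADMM sketch: client i minimizes f_i(x) = 0.5 * (x - a_i)^2,
# so the consensus variable z should converge to the mean of the a_i.
a = np.array([1.0, 4.0, 7.0])          # hypothetical per-client optima
rho = 1.0                              # ADMM penalty parameter
x = np.zeros_like(a)                   # client primal variables
y = np.zeros_like(a)                   # client dual variables
z = 0.0                                # server consensus variable

for _ in range(100):
    x = (a + rho * z - y) / (1.0 + rho)    # local closed-form primal update
    z = np.mean(x + y / rho)               # server aggregation (consensus) step
    y = y + rho * (x - z)                  # dual ascent on the consensus gap
```

In a federated deployment, the `x` and `y` updates run locally on each client and only `x + y / rho` travels to the server, which is what makes ADMM-style protocols attractive under system heterogeneity.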
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.