AugFL: Augmenting Federated Learning with Pretrained Models
- URL: http://arxiv.org/abs/2503.02154v1
- Date: Tue, 04 Mar 2025 00:37:33 GMT
- Title: AugFL: Augmenting Federated Learning with Pretrained Models
- Authors: Sheng Yue, Zerui Qin, Yongheng Deng, Ju Ren, Yaoxue Zhang, Junshan Zhang
- Abstract summary: Federated Learning (FL) has garnered widespread interest in recent years. In this paper, we consider a networked FL system formed by a central server and distributed clients.
- Score: 35.42275317522609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has garnered widespread interest in recent years. However, owing to strict privacy policies or limited storage capacities of training participants such as IoT devices, its effective deployment is often impeded by the scarcity of training data in practical decentralized learning environments. In this paper, we study enhancing FL with the aid of (large) pre-trained models (PMs), which encapsulate a wealth of general/domain-agnostic knowledge, to alleviate the data requirement in conducting FL from scratch. Specifically, we consider a networked FL system formed by a central server and distributed clients. First, we formulate the PM-aided personalized FL as a regularization-based federated meta-learning problem, where clients join forces to learn a meta-model with knowledge transferred from a private PM stored at the server. Then, we develop an inexact-ADMM-based algorithm, AugFL, to optimize the problem without exposing the PM or imposing additional computational costs on local clients. Further, we establish theoretical guarantees for AugFL in terms of communication complexity, adaptation performance, and the benefit of knowledge transfer in general non-convex cases. Extensive experiments corroborate the efficacy and superiority of AugFL over existing baselines.
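To make the formulation above concrete, here is a minimal single-client sketch of a regularization-based federated meta-learning objective: a MAML-style one-step adaptation plus a proximal term pulling the client's meta-parameters toward a server-side variable that carries the knowledge transferred from the PM. The linear model, the function and variable names (client_meta_objective, z, rho, alpha), and the exact form of the penalty are illustrative assumptions, not the paper's actual objective or its inexact-ADMM updates.

```python
import numpy as np

# Hypothetical single-client view of a regularized federated meta-learning
# objective. Names and the quadratic penalty are assumptions for illustration.

def loss(w, X, y):
    # Squared loss of a linear model on one client's data.
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

def client_meta_objective(theta, support, query, z, rho, alpha=0.01):
    """MAML-style objective with a proximal pull toward a server variable.

    theta : client's copy of the meta-model parameters
    z     : server-side variable carrying knowledge distilled from the private PM
    rho   : weight of the regularization (ADMM-style penalty) term
    alpha : inner-loop adaptation step size
    """
    X_s, y_s = support
    X_q, y_q = query
    adapted = theta - alpha * grad(theta, X_s, y_s)   # one-step local adaptation
    return loss(adapted, X_q, y_q) + 0.5 * rho * np.sum((theta - z) ** 2)

# Example: evaluate the objective for one client on random data.
rng = np.random.default_rng(0)
d = 5
theta = rng.normal(size=d)
z = rng.normal(size=d)
support = (rng.normal(size=(20, d)), rng.normal(size=20))
query = (rng.normal(size=(10, d)), rng.normal(size=10))
print(client_meta_objective(theta, support, query, z, rho=0.1))
```

In an inexact-ADMM scheme of this flavor, each client would only approximately minimize such an augmented local objective, while the server updates z and the dual variables; the PM itself never has to leave the server, consistent with the abstract's claim.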
Related papers
- Federated Unlearning Made Practical: Seamless Integration via Negated Pseudo-Gradients [3.12131298354022]
This paper introduces a novel method that leverages negated Pseudo-gradient Updates for Federated Unlearning (PUF).
Our approach uses only the standard client model updates already produced during regular FL rounds and interprets them as pseudo-gradients (a minimal sketch of this idea follows the entry).
Unlike state-of-the-art mechanisms, PUF seamlessly integrates with FL, incurs no additional computational or communication overhead beyond standard FL rounds, and supports concurrent unlearning requests.
arXiv Detail & Related papers (2025-04-08T09:05:33Z)
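A toy sketch of the negated pseudo-gradient idea summarized in the entry above: a client's standard FL update is read as a pseudo-gradient, and its negation is applied to the global model to roll back that client's contribution. The function names, the plain weight difference, and the unscaled step are assumptions for illustration rather than PUF's exact procedure.

```python
import numpy as np

def pseudo_gradient(global_weights, client_weights):
    # In a regular FL round the client returns its locally trained weights;
    # the difference from the global model it started from acts as a pseudo-gradient.
    return client_weights - global_weights

def unlearn(global_weights, forgotten_client_update, lr=1.0):
    # Unlearning step: move the global model opposite to the forgotten client's update.
    return global_weights - lr * forgotten_client_update

# Example: forget client 0's last contribution to a 3-parameter model.
w_global = np.array([0.5, -0.2, 0.1])
w_client0 = np.array([0.7, -0.1, 0.0])
w_unlearned = unlearn(w_global, pseudo_gradient(w_global, w_client0))
```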
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FMs), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications (see the sketch after this entry).
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
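Because the survey entry above centers on PEFT for FL, the following is a hedged LoRA-style sketch of why it is communication-efficient: the frozen base weight never leaves the device, and only two small adapter factors are trained and exchanged each round. The shapes, rank, and initialization are illustrative assumptions, not details taken from the survey.

```python
import numpy as np

d_out, d_in, rank = 64, 128, 4
W_frozen = np.random.randn(d_out, d_in) * 0.01   # pretrained weight, never communicated
A = np.random.randn(rank, d_in) * 0.01           # trainable adapter factor
B = np.zeros((d_out, rank))                      # trainable adapter factor (zero init)

def effective_weight():
    # The model computes with W_frozen + B @ A; only A and B change during fine-tuning.
    return W_frozen + B @ A

def fl_round_payload():
    # A client uploads only the adapters: O(rank * (d_in + d_out)) values
    # instead of the full O(d_in * d_out) weight matrix.
    return {"A": A, "B": B}
```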
- FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43% respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z)
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms the competitors with only one-round communication and limited training samples, while achieving performance comparable to methods that use multiple rounds of communication.
Since no data or deep models are shared across clients, privacy is well protected and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- On the Design of Communication-Efficient Federated Learning for Health Monitoring [21.433739206682404]
We propose a communication-efficient federated learning (CEFL) framework that involves client clustering and transfer learning.
CEFL can save up to 98.45% in communication costs while conceding less than 3% in accuracy loss, when compared to the conventional FL.
arXiv Detail & Related papers (2022-11-30T12:52:23Z)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account (see the sketch after this entry).
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
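The "multiple heads" idea in the entry above can be pictured roughly as follows: in vertical FL each client holds a disjoint feature partition and contributes an embedding, and the server keeps a separate head per client so each contribution enters the prediction individually. This is a loose sketch under assumed linear embeddings and sum aggregation; it is not VIM's actual architecture or its ADMM-based communication scheme.

```python
import numpy as np

num_clients, feat_per_client, embed_dim, num_classes = 3, 10, 8, 5

# Client-side embedders: each client maps its private feature slice to an embedding.
client_embedders = [np.random.randn(embed_dim, feat_per_client) * 0.1
                    for _ in range(num_clients)]

# Server-side heads: one head per client keeps contributions separate.
server_heads = [np.random.randn(num_classes, embed_dim) * 0.1
                for _ in range(num_clients)]

def predict(feature_slices):
    # feature_slices[k] is client k's feature vector for a single sample.
    logits = np.zeros(num_classes)
    for k, x_k in enumerate(feature_slices):
        h_k = client_embedders[k] @ x_k      # computed locally by client k
        logits += server_heads[k] @ h_k      # combined on the server
    return logits

# Example forward pass with random per-client features.
sample = [np.random.randn(feat_per_client) for _ in range(num_clients)]
print(predict(sample))
```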
- On the Importance and Applicability of Pre-Training for Federated Learning [28.238484580662785]
We conduct a systematic study to explore pre-training for federated learning.
We find that pre-training can not only improve FL but also close its accuracy gap to its centralized learning counterpart.
We conclude our paper with an attempt to understand the effect of pre-training on FL.
arXiv Detail & Related papers (2022-06-23T06:02:33Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge (see the sketch after this entry).
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
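As a simple illustration of the sparse-mask personalization referenced in the entry above, the snippet below keeps only a client-specific subset of the shared weights. The top-k magnitude criterion and the keep ratio are illustrative assumptions; Dis-PFL's actual mask construction and decentralized training loop are not reproduced here.

```python
import numpy as np

def topk_mask(weights, keep_ratio=0.2):
    # Binary mask selecting the largest-magnitude entries of a weight tensor.
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.sort(flat)[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def personalized_model(shared_weights, client_mask):
    # Each client trains and communicates only the entries its mask keeps.
    return shared_weights * client_mask

# Example: a client personalizes a dense 4x4 weight matrix with its own mask.
W = np.random.randn(4, 4)
mask = topk_mask(W, keep_ratio=0.25)
W_personal = personalized_model(W, mask)
```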
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- FedComm: Federated Learning as a Medium for Covert Communication [56.376997104843355]
Federated Learning (FL) is a solution to mitigate the privacy implications related to the adoption of deep learning.
This paper thoroughly investigates the communication capabilities of an FL scheme.
We introduce FedComm, a novel multi-system covert-communication technique.
arXiv Detail & Related papers (2022-01-21T17:05:56Z)
- Continual Local Training for Better Initialization of Federated Models [14.289213162030816]
Federated learning (FL) refers to the learning paradigm that trains machine learning models directly in decentralized systems.
The popular FL algorithm Federated Averaging (FedAvg) suffers from weight divergence.
We propose the local continual training strategy to address this problem.
arXiv Detail & Related papers (2020-05-26T12:27:31Z)