Pisces: Efficient Federated Learning via Guided Asynchronous Training
- URL: http://arxiv.org/abs/2206.09264v1
- Date: Sat, 18 Jun 2022 18:25:30 GMT
- Title: Pisces: Efficient Federated Learning via Guided Asynchronous Training
- Authors: Zhifeng Jiang, Wei Wang, Baochun Li, Bo Li
- Abstract summary: Federated learning (FL) is typically performed in a synchronous parallel manner, where the involvement of a slow client delays a training iteration.
Current FL systems employ a participant selection strategy to select fast clients with quality data in each iteration.
We present Pisces, an asynchronous FL system with intelligent participant selection and model aggregation for accelerated training.
- Score: 42.46549526793953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is typically performed in a synchronous parallel
manner, where the involvement of a slow client delays a training iteration.
Current FL systems employ a participant selection strategy to select fast
clients with quality data in each iteration. However, this is not always
possible in practice, and the selection strategy often has to navigate an
unpleasant trade-off between the speed and the data quality of clients.
In this paper, we present Pisces, an asynchronous FL system with intelligent
participant selection and model aggregation for accelerated training. To avoid
incurring excessive resource cost and stale training computation, Pisces uses a
novel scoring mechanism to identify suitable clients to participate in a
training iteration. It also adapts the pace of model aggregation to dynamically
bound the progress gap between the selected clients and the server, with a
provable convergence guarantee in a smooth non-convex setting. We have
implemented Pisces in an open-source FL platform called Plato, and evaluated
its performance in large-scale experiments with popular vision and language
models. Pisces outperforms the state-of-the-art synchronous and asynchronous
schemes, accelerating the time-to-accuracy by up to 2.0x and 1.9x,
respectively.
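To make the abstract's mechanism concrete, the following is a minimal Python sketch of the two ideas it describes: scoring clients before admitting them to an asynchronous round, and pacing aggregation so the progress gap (staleness) between selected clients and the server stays bounded. The score formula, the fields speed, data_utility, and last_round, and the max_staleness threshold are illustrative assumptions, not the actual definitions used in Pisces.

```python
# Hedged sketch of staleness-aware client scoring and bounded-staleness
# aggregation pacing in the spirit of Pisces. Formulas and thresholds are
# placeholders, not the paper's exact design.
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    speed: float          # estimated updates per unit time (higher is faster)
    data_utility: float   # proxy for data quality, e.g. recent training loss
    last_round: int       # server round of the model the client last pulled

def score(client: Client, current_round: int, alpha: float = 0.5) -> float:
    """Combine data utility and speed, discounted by staleness (hypothetical)."""
    staleness = current_round - client.last_round
    return (client.data_utility ** alpha) * (client.speed ** (1 - alpha)) / (1 + staleness)

def select_clients(clients, current_round, k):
    """Admit the k highest-scoring clients to the next asynchronous round."""
    return sorted(clients, key=lambda c: score(c, current_round), reverse=True)[:k]

def should_aggregate(pending_updates, current_round, max_staleness=5):
    """Pace aggregation: proceed only while every pending update is fresh enough,
    keeping the progress gap between selected clients and the server bounded."""
    return all(current_round - u["base_round"] <= max_staleness for u in pending_updates)
```

For instance, select_clients(clients, current_round=10, k=5) would admit the five clients with the highest hypothetical score; Pisces derives its actual utility measure and aggregation pacing rule, together with the convergence guarantee, in the paper itself.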
Related papers
- FedAST: Federated Asynchronous Simultaneous Training [27.492821176616815]
Federated Learning (FL) enables devices or clients to collaboratively train machine learning (ML) models without sharing their private data.
Much of the existing work in FL focuses on efficiently learning a model for a single task.
In this paper, we propose simultaneous training of multiple FL models using a common set of datasets.
arXiv Detail & Related papers (2024-06-01T05:14:20Z) - Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning [56.21666819468249]
Resource constraints of clients and communication costs pose major problems for training large models in Federated Learning.
We introduce Sparse-ProxSkip, which combines training and acceleration in a sparse setting.
We demonstrate the good performance of Sparse-ProxSkip in extensive experiments.
arXiv Detail & Related papers (2024-05-31T05:21:12Z) - Achieving Linear Speedup in Asynchronous Federated Learning with
Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z) - Asynchronous Wireless Federated Learning with Probabilistic Client
Selection [20.882840344104135]
Federated learning (FL) is a promising distributed learning framework where clients collaboratively train a machine learning model coordinated by a server.
We consider a setting where each client keeps its local updates and probabilistically transmits its local model to the server.
We develop an iterative algorithm to solve the resulting non-convex optimization problem globally optimally.
arXiv Detail & Related papers (2023-11-28T12:39:34Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by reformulating the training-latency optimization as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z) - TimelyFL: Heterogeneity-aware Asynchronous Federated Learning with
Adaptive Partial Training [17.84692242938424]
TimelyFL is a heterogeneity-aware asynchronous Federated Learning framework with adaptive partial training.
We show that TimelyFL improves the participation rate by 21.13%, speeds up convergence by 1.28x - 2.89x, and improves test accuracy by 6.25%.
arXiv Detail & Related papers (2023-04-14T06:26:08Z) - Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
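Below is a similarly hedged sketch of one plausible reading of the straggler-resilient selection idea in the last entry: rank clients by an estimated speed and gradually widen the participating cohort over rounds, so slower clients join only as the accuracy requirement grows. The estimated_speed field, the initial fraction, and the geometric growth schedule are placeholders, not the paper's procedure.

```python
# Illustrative sketch (not the paper's algorithm): rank clients by estimated
# speed and widen the participating cohort geometrically over rounds.
def adaptive_cohort(clients, round_idx, initial_fraction=0.1, growth=1.5):
    """Return the clients to involve in this round, fastest first."""
    ranked = sorted(clients, key=lambda c: c["estimated_speed"], reverse=True)
    target = int(initial_fraction * (growth ** round_idx) * len(ranked))
    size = min(len(ranked), max(1, target))
    return ranked[:size]
```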