Latency Aware Semi-synchronous Client Selection and Model Aggregation
for Wireless Federated Learning
- URL: http://arxiv.org/abs/2210.10311v1
- Date: Wed, 19 Oct 2022 05:59:22 GMT
- Title: Latency Aware Semi-synchronous Client Selection and Model Aggregation
for Wireless Federated Learning
- Authors: Liangkun Yu, Xiang Sun, Rana Albelaihi, Chen Yi
- Abstract summary: Federated learning (FL) is a collaborative machine learning framework that requires different clients (e.g., Internet of Things devices) to participate in the machine learning model training process.
The traditional FL process may suffer from the straggler problem in heterogeneous client settings.
We propose the Latency awarE Semi-synchronous client Selection and mOdel aggregation for federated learNing (LESSON) method, which allows all the clients to participate in the whole FL process but with different frequencies.
- Score: 0.6882042556551609
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a collaborative machine learning framework that
requires different clients (e.g., Internet of Things devices) to participate in
the machine learning model training process by training and uploading their
local models to an FL server in each global iteration. Upon receiving the local
models from all the clients, the FL server generates a global model by
aggregating the received local models. This traditional FL process may suffer
from the straggler problem in heterogeneous client settings, where the FL
server has to wait for slow clients to upload their local models in each global
iteration, thus increasing the overall training time. One of the solutions is
to set up a deadline and only the clients that can upload their local models
before the deadline would be selected in the FL process. This solution may lead
to a slow convergence rate and global model overfitting issues due to the
limited client selection. In this paper, we propose the Latency awarE
Semi-synchronous client Selection and mOdel aggregation for federated learNing
(LESSON) method that allows all the clients to participate in the whole FL
process but with different frequencies. That is, faster clients would be
scheduled to upload their models more frequently than slow clients, thus
resolving the straggler problem and accelerating the convergence speed, while
avoiding model overfitting. Also, LESSON is capable of adjusting the tradeoff
between the model accuracy and convergence rate by varying the deadline.
Extensive simulations have been conducted to compare the performance of LESSON
with the other two baseline methods, i.e., FedAvg and FedCS. The simulation
results demonstrate that LESSON achieves faster convergence speed than FedAvg
and FedCS, and higher model accuracy than FedCS.
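To make the schedule concrete, here is a minimal Python sketch of a LESSON-style semi-synchronous round, assuming a simple tiering rule (a client whose round-trip latency spans k deadlines uploads every k-th global iteration) and FedAvg-style weighted averaging; the Client class and all names are illustrative, not the authors' implementation.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Client:
    latency: float     # seconds for one local-train-plus-upload round trip
    num_samples: int   # size of the client's local dataset

    def train(self, model):
        # Stand-in for local SGD: perturb each parameter slightly.
        return {k: v + random.gauss(0, 0.01) for k, v in model.items()}

def tier(client: Client, deadline: float) -> int:
    # A client whose round trip spans k deadlines lands in tier k and
    # uploads its local model once every k global iterations.
    return max(1, math.ceil(client.latency / deadline))

def lesson_round(model, clients, deadline, t):
    # Semi-synchronous selection: tier-k clients participate when t % k == 0,
    # so fast clients upload more often than stragglers, but no client is
    # permanently excluded (unlike a hard FedCS-style deadline cutoff).
    selected = [c for c in clients if t % tier(c, deadline) == 0]
    if not selected:
        return model
    total = sum(c.num_samples for c in selected)
    updates = [(c.num_samples, c.train(model)) for c in selected]
    # FedAvg-style weighted average over the models received this round.
    return {k: sum(n * u[k] for n, u in updates) / total for k in model}

# Example: a two-parameter model and clients with heterogeneous latency.
model = {"w": 0.0, "b": 0.0}
clients = [Client(latency=l, num_samples=100) for l in (0.5, 1.2, 3.8)]
for t in range(1, 5):
    model = lesson_round(model, clients, deadline=1.0, t=t)
```

Shrinking the deadline pushes more clients into higher tiers (rarer uploads), which is one way to read the accuracy/convergence-rate tradeoff described above.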
Related papers
- FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification [8.747592727421596]
Federated learning (FL) enables clients to collaboratively train machine learning models under the coordination of a server.
FedAR involves all clients in every global model update, achieving a high-quality global model on the server.
FedAR also delivers impressive performance in the presence of a large number of clients with severe unavailability.
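The title's "local update approximation" suggests the server substitutes something for the updates of unavailable clients; the cache-the-last-update rule below is an illustrative guess at that idea, not the paper's exact method.

```python
def aggregate_with_approximation(model, fresh_updates, cache):
    # fresh_updates: client id -> this round's local update (dict of params);
    # cache: most recent update ever received from each client (hypothetical).
    cache.update(fresh_updates)
    # Average the fresh updates together with cached (approximated) updates
    # of unavailable clients, so every client contributes every round.
    n = len(cache)
    return {k: sum(u[k] for u in cache.values()) / n for k in model}
```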
arXiv Detail & Related papers (2024-07-26T21:56:52Z)
- FedAST: Federated Asynchronous Simultaneous Training [27.492821176616815]
Federated Learning (FL) enables devices or clients to collaboratively train machine learning (ML) models without sharing their private data.
Much of the existing work in FL focuses on efficiently learning a model for a single task.
In this paper, we propose simultaneous training of multiple FL models using a common set of datasets.
arXiv Detail & Related papers (2024-06-01T05:14:20Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Client Selection for Generalization in Accelerated Federated Learning: A Multi-Armed Bandit Approach [20.300740276237523]
Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets.
We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL).
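As a generic illustration of the bandit view of client selection (standard UCB machinery, not BSFL's actual reward definition or scheduling rule):

```python
import math

def select_clients(stats, t, budget):
    # stats: client id -> (times_selected, mean_observed_reward); t >= 1 is
    # the current round. Pick the `budget` clients with the highest upper
    # confidence bound, balancing exploitation and exploration.
    def ucb(cid):
        pulls, mean = stats[cid]
        if pulls == 0:
            return float("inf")  # try every client at least once
        return mean + math.sqrt(2 * math.log(t) / pulls)
    return sorted(stats, key=ucb, reverse=True)[:budget]
```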
arXiv Detail & Related papers (2023-03-18T09:45:58Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Aergia: Leveraging Heterogeneity in Federated Learning Systems [5.0650178943079]
Federated Learning (FL) relies on clients to update a global model using their local datasets.
Aergia is a novel approach where slow clients freeze the part of their model that is the most computationally intensive to train.
Aergia significantly reduces the training time under heterogeneous settings by up to 27% and 53% compared to FedAvg and TiFL, respectively.
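A minimal sketch of the freezing idea described above, with a made-up per-layer cost model (Aergia additionally offloads the frozen parts' training to faster clients, which is omitted here):

```python
def freeze_for_slow_client(layers, layer_cost, time_budget):
    # Keep training the cheapest layers first; the most computationally
    # intensive layers that no longer fit the client's time budget are frozen.
    trainable, spent = [], 0.0
    for name in sorted(layers, key=lambda l: layer_cost[l]):
        if spent + layer_cost[name] > time_budget:
            break
        trainable.append(name)
        spent += layer_cost[name]
    frozen = [l for l in layers if l not in trainable]
    return trainable, frozen
```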
arXiv Detail & Related papers (2022-10-12T12:59:18Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method that handles heterogeneous device capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
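A toy sketch of the size-assignment idea (the size ladder and capability scores are invented for illustration; the knowledge-sharing mechanism mentioned above is not shown):

```python
MODEL_SIZES = {"small": 1.0, "medium": 4.0, "large": 16.0}  # relative cost

def assign_model(capability: float) -> str:
    # Give each client the largest model variant its compute budget fits.
    fitting = [s for s, cost in MODEL_SIZES.items() if cost <= capability]
    return max(fitting, key=MODEL_SIZES.get) if fitting else "small"
```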
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
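For reference, the standard Laplace approximation underlying this approach replaces a posterior with a Gaussian centered at the mode (a textbook identity; the paper's specific online client/server updates are not reproduced here):

```latex
p(w \mid \mathcal{D}) \approx \mathcal{N}\!\left(w \,\middle|\, w^{*},\, H^{-1}\right),
\qquad
H = -\left.\nabla^{2}_{w} \log p(w \mid \mathcal{D})\right|_{w = w^{*}}
```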
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.