FAVANO: Federated AVeraging with Asynchronous NOdes
- URL: http://arxiv.org/abs/2305.16099v2
- Date: Wed, 22 Nov 2023 19:52:37 GMT
- Title: FAVANO: Federated AVeraging with Asynchronous NOdes
- Authors: Louis Leconte, Van Minh Nguyen, Eric Moulines
- Abstract summary: We propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments.
- Score: 14.412305295989444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel centralized Asynchronous Federated Learning
(FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in
resource-constrained environments. Despite its popularity, "classical"
federated learning faces the increasingly difficult task of scaling synchronous
communication over large wireless networks. Moreover, clients typically have
different computing resources and therefore computing speed, which can lead to
a significant bias (in favor of "fast" clients) when the updates are
asynchronous. Therefore, practical deployment of FL requires handling users
with strongly varying computing speeds in communication- and resource-constrained
settings. We provide convergence guarantees for FAVANO in a smooth, non-convex
environment and carefully compare the obtained convergence guarantees with
existing bounds, when they are available. Experimental results show that the
FAVANO algorithm outperforms current methods on standard benchmarks.
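As a rough illustration of the setting, here is a minimal Python sketch of a centralized asynchronous averaging server in the spirit of the abstract; the staleness-decayed mixing weight and the bookkeeping are illustrative assumptions, not FAVANO's actual update rule.

```python
import threading
from collections import defaultdict

import numpy as np


class AsyncFedServer:
    """Toy centralized asynchronous FL server: clients push updates at their
    own pace, and the server mixes each update into the global model as it
    arrives, down-weighting stale contributions (illustrative rule only)."""

    def __init__(self, model_dim, lr=0.5):
        self.w = np.zeros(model_dim)          # global model
        self.t = 0                            # server clock: updates applied so far
        self.lr = lr
        self.lock = threading.Lock()
        self.client_clock = defaultdict(int)  # server time of each client's last pull

    def pull(self, client_id):
        """A client fetches the current global model; record when it left."""
        with self.lock:
            self.client_clock[client_id] = self.t
            return self.w.copy()

    def push(self, client_id, delta):
        """A client returns its local update; apply it with staleness decay."""
        with self.lock:
            staleness = self.t - self.client_clock[client_id]
            weight = self.lr / (1.0 + staleness)  # assumed decay schedule
            self.w += weight * np.asarray(delta)
            self.t += 1
```

A slow client's update is computed against an old model, so its weight shrinks with staleness; this is one common way to limit the bias toward "fast" clients that the abstract highlights.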
Related papers
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
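A rough Python sketch of the layer-wise idea: backpropagation produces gradients from the output layer backwards, so a straggler that runs out of time can still report its last layers, and the server aggregates each layer over whichever clients reached it. The depth-ordering assumption and names below are illustrative, not SALF's exact procedure.

```python
import numpy as np

def aggregate_layerwise(global_model, partial_updates):
    """global_model: dict mapping layer name -> weight array.
    partial_updates: list of dicts; a straggler's dict may contain only the
    layers it finished (backprop reaches the output-side layers first).
    Each layer is averaged over the clients that reported it."""
    new_model = {}
    for name, weights in global_model.items():
        contribs = [u[name] for u in partial_updates if name in u]
        if contribs:   # at least one client reached this layer
            new_model[name] = np.mean(contribs, axis=0)
        else:          # no client got this far: keep the old weights
            new_model[name] = weights
    return new_model
```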
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS2), over mobile edge networks.
We derive an upper bound of the convergence rate of PerFedS2 in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z)
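One way to picture a semi-synchronous round: the server fixes a deadline, aggregates whichever updates arrived in time, and lets stragglers contribute to a later round. This is a generic Python sketch of the semi-synchronous pattern, not PerFedS2's actual scheduler.

```python
import queue
import time

import numpy as np

def semi_sync_round(update_queue, global_w, deadline_s=5.0):
    """Collect client updates from `update_queue` until the deadline, then
    average whatever arrived; late clients simply feed a later round."""
    arrived = []
    round_end = time.monotonic() + deadline_s
    while True:
        remaining = round_end - time.monotonic()
        if remaining <= 0:
            break
        try:
            arrived.append(update_queue.get(timeout=remaining))
        except queue.Empty:
            break
    if arrived:  # plain average of the updates that met the deadline
        global_w = global_w + np.mean(arrived, axis=0)
    return global_w, len(arrived)
```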
- Communication-Efficient Federated Learning With Data and Client Heterogeneity [22.432529149142976]
Federated Learning (FL) enables large-scale distributed training of machine learning models.
However, executing FL at scale comes with inherent practical challenges.
We present the first variant of the classic federated averaging (FedAvg) algorithm that is robust to both data and client heterogeneity.
arXiv Detail & Related papers (2022-06-20T22:39:39Z)
- Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Our proposed TT-Fed algorithm improves the converged test accuracy over its two benchmark schemes by up to 12.5% and 5%, respectively.
arXiv Detail & Related papers (2022-04-26T16:37:29Z)
- Blockchain-enabled Server-less Federated Learning [5.065631761462706]
We focus on an asynchronous server-less Federated Learning solution empowered by blockchain (BC) technology.
In contrast to commonly adopted FL approaches, we advocate an asynchronous method whereby model aggregation is performed as clients submit their local updates.
arXiv Detail & Related papers (2021-12-15T07:41:23Z)
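A toy Python sketch of the server-less idea: clients post updates to an append-only log standing in for the blockchain, and any node can deterministically replay the log to obtain the current model. Consensus and validation are out of scope; the folding rule is an assumption, not the paper's protocol.

```python
import numpy as np

class ToyLedger:
    """Append-only log standing in for a blockchain: clients submit local
    updates as entries, and any node can replay the log to reconstruct the
    global model without a central server (illustrative only)."""

    def __init__(self, model_dim):
        self.model_dim = model_dim
        self.blocks = []  # each entry: (client_id, update vector)

    def submit(self, client_id, delta):
        self.blocks.append((client_id, np.asarray(delta)))

    def current_model(self, lr=0.1):
        """Replay: fold every recorded update into the model in log order,
        mirroring aggregation done as clients submit their updates."""
        w = np.zeros(self.model_dim)
        for _, delta in self.blocks:
            w += lr * delta
        return w
```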
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
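A crude sketch of the adaptive-selection idea: start with the fastest clients (cheap rounds) and widen the pool as training progresses and more statistical accuracy is needed. The linear growth schedule below is an assumption for illustration, not the paper's selection criterion.

```python
def select_clients(clients_by_speed, round_idx, growth=2):
    """clients_by_speed: client ids sorted fastest-first.
    Early rounds use a small, fast subset; later rounds widen the pool so
    slower clients' data can refine the model (illustrative schedule)."""
    k = min(len(clients_by_speed), growth * (round_idx + 1))
    return clients_by_speed[:k]
```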
- Faster Non-Convex Federated Learning via Global and Local Momentum [57.52663209739171]
FedGLOMO is the first (first-order) FL algorithm that employs both global (server-side) and local (client-side) momentum.
Our algorithm is provably optimal even with compressed communication between the clients and the server.
arXiv Detail & Related papers (2020-12-07T21:05:31Z)
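For flavor, here is a generic server-side momentum step of the kind suggested by "global momentum"; FedGLOMO's actual variance-reduced update differs in its details.

```python
import numpy as np

def server_momentum_step(w, v, avg_client_delta, beta=0.9, lr=1.0):
    """Generic global-momentum update: keep a running momentum buffer `v`
    over the averaged client updates and apply it to the model `w` (a
    stand-in for, not a reproduction of, FedGLOMO's rule)."""
    v = beta * v + (1.0 - beta) * avg_client_delta
    w = w + lr * v
    return w, v
```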
- Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise [26.9902939745173]
We propose a new algorithm for asynchronous federated learning which eliminates waiting times and reduces overall network communication.
We give a rigorous theoretical analysis for strongly convex objective functions and provide simulation results.
arXiv Detail & Related papers (2020-07-17T19:47:16Z)
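The differential-privacy angle can be sketched as clipping each client update and adding a single Gaussian noise draw to the aggregate. The clipping norm and noise multiplier below are placeholders, not the calibrated parameters from the paper.

```python
import numpy as np

def dp_aggregate(updates, clip=1.0, noise_mult=1.0, rng=None):
    """Clip each client update to L2 norm `clip`, average, and add one
    Gaussian noise draw to the aggregate (placeholder parameters)."""
    rng = rng if rng is not None else np.random.default_rng()
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(updates), size=avg.shape)
    return avg + noise
```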
- Asynchronous Decentralized Learning of a Neural Network [49.15799302636519]
We exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
Asynchronous decentralized SSFN relaxes the communication bottleneck by allowing a single node activation and one-sided communication, which reduces the communication overhead significantly.
Experiments comparing asynchronous dSSFN with traditional synchronous dSSFN show the competitive performance of asynchronous dSSFN, especially when the communication network is sparse.
arXiv Detail & Related papers (2020-04-10T15:53:37Z)
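Roughly, an ARock-style asynchronous step has a single node wake up and exchange with one neighbor. The gossip-averaging sketch below only hints at that one-node/one-side pattern and is not the SSFN learning algorithm itself.

```python
import numpy as np

def async_gossip_step(params, graph, node, rng=None):
    """One asynchronous activation: `node` wakes, picks one neighbor in the
    (possibly sparse) communication graph, and both move to their average.
    params: dict node -> parameter vector; graph: dict node -> neighbor list."""
    rng = rng if rng is not None else np.random.default_rng()
    neighbor = graph[node][rng.integers(len(graph[node]))]
    mean = 0.5 * (params[node] + params[neighbor])
    params[node], params[neighbor] = mean.copy(), mean.copy()
    return params
```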
This list is automatically generated from the titles and abstracts of the papers on this site.