FAVANO: Federated AVeraging with Asynchronous NOdes
- URL: http://arxiv.org/abs/2305.16099v2
- Date: Wed, 22 Nov 2023 19:52:37 GMT
- Title: FAVANO: Federated AVeraging with Asynchronous NOdes
- Authors: Louis Leconte, Van Minh Nguyen, Eric Moulines
- Abstract summary: We propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments.
- Score: 14.412305295989444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel centralized Asynchronous Federated Learning
(FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in
resource-constrained environments. Despite its popularity, ``classical''
federated learning faces the increasingly difficult task of scaling synchronous
communication over large wireless networks. Moreover, clients typically have
different computing resources and therefore different computing speeds, which
can lead to a significant bias (in favor of ``fast'' clients) when the updates
are asynchronous. Practical deployment of FL therefore requires handling users
with strongly varying computing speeds in communication- and
resource-constrained settings. We provide convergence guarantees for FAVANO in
a smooth, non-convex
environment and carefully compare the obtained convergence guarantees with
existing bounds, when they are available. Experimental results show that the
FAVANO algorithm outperforms current methods on standard benchmarks.
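For intuition, the sketch below shows what a generic centralized asynchronous FedAvg-style server loop can look like in Python: the server applies each client update as it arrives and down-weights stale contributions. This is a minimal illustration under assumed names (the queue protocol and the 1/(1+staleness) discount are illustrative choices), not FAVANO's actual update rule.

```python
import numpy as np

def async_server(global_model, update_queue, lr=1.0, max_steps=1000):
    """Generic asynchronous FedAvg-style server loop (illustrative only).

    Clients push (delta, base_version) pairs into update_queue (e.g. a
    queue.Queue); the server applies each delta as it arrives, scaling
    down updates computed from an old copy of the model.
    """
    version = 0
    while version < max_steps:
        delta, base_version = update_queue.get()  # blocks until a client reports
        staleness = version - base_version        # age of the client's base model
        weight = 1.0 / (1.0 + staleness)          # simple staleness discount (an assumption)
        global_model += lr * weight * delta       # apply the update immediately
        version += 1
    return global_model
```

A synchronous FedAvg server would instead block until every sampled client reports before averaging; removing that barrier improves wall-clock time but is exactly what introduces the fast-client bias described above.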
Related papers
- FedRTS: Federated Robust Pruning via Combinatorial Thompson Sampling [12.067872131025231]
Federated Learning (FL) enables collaborative model training across distributed clients without data sharing.
Current methods use dynamic pruning to improve efficiency by periodically adjusting sparse model topologies while maintaining sparsity.
We propose Federated Robust pruning via Thompson Sampling (FedRTS), a novel framework designed to develop robust sparse models.
arXiv Detail & Related papers (2025-01-31T13:26:22Z)
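As a reminder of the sampling primitive behind FedRTS, the following is a minimal Beta-Bernoulli Thompson sampling sketch for choosing which of n candidate connections to keep in a sparse topology. The per-connection Bernoulli reward model and the helper thompson_select are illustrative assumptions, not FedRTS itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_select(successes, failures, k):
    """Pick k of n arms (candidate connections) by Thompson sampling.

    successes/failures hold per-arm Beta posterior counts; the arms with
    the highest sampled success probabilities are kept.
    """
    samples = rng.beta(successes + 1, failures + 1)  # one draw per Beta posterior
    return np.argsort(samples)[-k:]                  # indices of the k best draws

# Toy usage: 10 candidate connections, keep 3, then update the posterior of
# each kept arm with an observed 0/1 reward (e.g., whether the loss improved).
succ, fail = np.zeros(10), np.zeros(10)
kept = thompson_select(succ, fail, k=3)
rewards = rng.integers(0, 2, size=kept.size)
succ[kept] += rewards
fail[kept] += 1 - rewards
```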
- Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks [55.467288506826755]
Federated learning (FL) has been recognized as a viable solution for local-privacy-aware collaborative model training in wireless edge networks.
Most existing communication-efficient FL algorithms fail to reduce the significant inter-device variance.
We propose a novel communication-efficient FL algorithm, named FedQVR, which relies on a sophisticated variance-reduced scheme.
arXiv Detail & Related papers (2025-01-20T04:26:21Z)
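The FedQVR blurb names two generic ingredients, quantized communication and variance reduction. The sketch below illustrates both with an unbiased stochastic quantizer and an SVRG-style gradient correction; the pairing and all names are assumptions for illustration, not FedQVR's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(x, levels=8):
    """Unbiased stochastic quantization of a vector onto a uniform grid."""
    scale = np.max(np.abs(x)) + 1e-12
    y = np.abs(x) / scale * levels               # map magnitudes to [0, levels]
    low = np.floor(y)
    q = low + (rng.random(x.shape) < (y - low))  # round up w.p. the fractional part
    return np.sign(x) * q * scale / levels       # unbiased: E[output] == x

def vr_gradient(grad_at_w, grad_at_snapshot, full_grad_snapshot):
    """SVRG-style correction: a stochastic gradient at w, re-centered by the
    same sample's gradient at a snapshot point plus the snapshot's full gradient."""
    return grad_at_w - grad_at_snapshot + full_grad_snapshot
```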
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Asynchronous Federated Learning: A Scalable Approach for Decentralized Machine Learning [0.9208007322096533]
Federated Learning (FL) has emerged as a powerful paradigm for decentralized machine learning, enabling collaborative model training across diverse clients without sharing raw data.
Traditional FL approaches often face limitations in scalability and efficiency due to their reliance on synchronous client updates.
We propose an Asynchronous Federated Learning (AFL) algorithm, which allows clients to update the global model independently and asynchronously.
arXiv Detail & Related papers (2024-12-23T17:11:02Z)
- Communication-Efficient Federated Learning With Data and Client Heterogeneity [22.432529149142976]
Federated Learning (FL) enables large-scale distributed training of machine learning models.
Executing FL at scale, however, comes with inherent practical challenges.
We present the first variant of the classic federated averaging (FedAvg) algorithm that addresses both data and client heterogeneity.
arXiv Detail & Related papers (2022-06-20T22:39:39Z)
- Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5% over the two baseline schemes considered.
arXiv Detail & Related papers (2022-04-26T16:37:29Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
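To make adaptive client selection concrete, here is a toy scoring rule that mixes a client's compute speed with its data volume when sampling participants. The alpha-weighted score is invented for illustration and is not the selection criterion of the paper above.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_clients(speeds, num_samples, k, alpha=0.5):
    """Pick k clients by a score blending compute speed and data volume.

    alpha=1 selects purely by speed (straggler-avoiding); alpha=0 purely
    by statistical weight (data volume). Illustrative only.
    """
    speeds = np.asarray(speeds, dtype=float)
    num_samples = np.asarray(num_samples, dtype=float)
    score = alpha * speeds / speeds.sum() + (1 - alpha) * num_samples / num_samples.sum()
    probs = score / score.sum()
    return rng.choice(len(speeds), size=k, replace=False, p=probs)

# Toy usage: 6 clients with varying speeds and dataset sizes, select 3.
print(select_clients([1.0, 0.2, 3.0, 0.5, 2.0, 1.5], [100, 800, 50, 400, 200, 300], k=3))
```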
- Faster Non-Convex Federated Learning via Global and Local Momentum [57.52663209739171]
FedGLOMO is the first (first-order) FL algorithm to apply momentum in both the global (server-side) and local (client-side) update steps.
The algorithm is provably optimal even with compressed communication between the clients and the server.
arXiv Detail & Related papers (2020-12-07T21:05:31Z)
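As a sketch of what combining local (client-side) and global (server-side) momentum can look like in a FedAvg-style pipeline, the snippet below keeps one momentum buffer on each client and another on the server. Hyperparameters and buffer placement are illustrative, not FedGLOMO's actual construction.

```python
import numpy as np

def client_update(w, grad_fn, steps=5, lr=0.1, beta_local=0.9):
    """Local SGD with a client-side momentum buffer; returns the model delta."""
    m = np.zeros_like(w)
    w0, w = w.copy(), w.copy()
    for _ in range(steps):
        m = beta_local * m + grad_fn(w)  # local momentum accumulation
        w = w - lr * m
    return w - w0

def server_update(w, deltas, v, lr_global=1.0, beta_global=0.9):
    """FedAvg-style aggregation with a server-side (global) momentum buffer."""
    avg_delta = np.mean(deltas, axis=0)
    v = beta_global * v + avg_delta      # global momentum accumulation
    return w + lr_global * v, v
```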
- Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise [26.9902939745173]
We propose a new algorithm for asynchronous federated learning which eliminates waiting times and reduces overall network communication.
We provide rigorous theoretical analysis for strongly convex objective functions and provide simulation results.
arXiv Detail & Related papers (2020-07-17T19:47:16Z)
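For reference, getting differential privacy from noise added to the aggregate (rather than to each client's message) typically looks like the sketch below: clip each update, average, then add a single Gaussian draw. The clipping norm and noise scale here are placeholders, not the calibrated values from the paper above.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(deltas, clip=1.0, noise_mult=1.0):
    """Clip each client delta to L2 norm `clip`, average, then add one
    Gaussian noise draw calibrated to the clipping bound (illustrative)."""
    clipped = [d * min(1.0, clip / (np.linalg.norm(d) + 1e-12)) for d in deltas]
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip / len(deltas)  # per-coordinate noise std (placeholder)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```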
- Asynchronous Decentralized Learning of a Neural Network [49.15799302636519]
We exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
Asynchronous decentralized SSFN relaxes the communication bottleneck by allowing single-node activation and one-sided communication, which reduces the communication overhead significantly.
Experiments comparing asynchronous and synchronous dSSFN show that asynchronous dSSFN remains competitive, especially when the communication network is sparse.
arXiv Detail & Related papers (2020-04-10T15:53:37Z)
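Finally, the single-node-activation pattern in the last entry can be sketched as a gossip-style step: one node wakes up, reads its neighbors' current parameters (one-sided communication), and mixes them into its own copy. The mixing rule below is a generic placeholder, not the SSFN/ARock update.

```python
import numpy as np

def async_decentralized_step(params, neighbors, i, mix=0.5):
    """One asynchronous step for node i.

    `params` is a list of per-node parameter vectors and `neighbors[i]`
    lists the nodes adjacent to i; only node i is active, and it performs
    one-sided reads of its neighbors' parameters. Illustrative only.
    """
    neighbor_avg = np.mean([params[j] for j in neighbors[i]], axis=0)
    params[i] = (1 - mix) * params[i] + mix * neighbor_avg
```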