Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation
- URL: http://arxiv.org/abs/2210.01161v1
- Date: Mon, 3 Oct 2022 18:20:48 GMT
- Title: Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation
- Authors: Mohammad Taha Toghani and César A. Uribe
- Abstract summary: The \textit{FedBuff} algorithm allows asynchronous updates while preserving privacy via secure aggregation.
This paper presents a theoretical analysis of the convergence rate of this algorithm when heterogeneity in data, batch size, and delay are considered.
- Score: 0.6526824510982799
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Synchronous updates may compromise the efficiency of cross-device federated
learning once the number of active clients increases. The \textit{FedBuff}
algorithm (Nguyen et al., 2022) alleviates this problem by allowing
asynchronous updates (staleness), which enhances the scalability of training
while preserving privacy via secure aggregation. We revisit the
\textit{FedBuff} algorithm for asynchronous federated learning and extend the
existing analysis by removing the boundedness assumptions from the gradient
norm. This paper presents a theoretical analysis of the convergence rate of
this algorithm when heterogeneity in data, batch size, and delay are
considered.
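Since the analysis hinges on how buffered asynchronous aggregation interleaves stale client updates, a minimal sketch may help fix ideas. The single-process simulation below follows the FedBuff recipe from Nguyen et al. (2022): clients start local SGD from possibly stale model snapshots, the server accumulates their deltas in a buffer, and an averaged update is applied only once the buffer is full. The toy quadratic objective, the delay model, and all names and hyperparameters here are illustrative assumptions, not the paper's setup.

```python
import random
import numpy as np

# Minimal single-process simulation of FedBuff-style buffered asynchronous
# aggregation (Nguyen et al., 2022). The toy quadratic objective, the delay
# model, and all hyperparameters are illustrative assumptions.

def local_delta(stale_model, data, local_steps=5, client_lr=0.1):
    """Run a few local SGD steps from a (possibly stale) snapshot; return the delta."""
    w = stale_model.copy()
    for _ in range(local_steps):
        x = random.choice(data)
        w -= client_lr * 2.0 * (w - x)  # gradient of the toy loss ||w - x||^2
    return w - stale_model

def fedbuff(client_data, dim, steps=500, buffer_size=10, server_lr=1.0, max_delay=3):
    model = np.zeros(dim)
    in_flight = []  # (stale snapshot, client id, completion time)
    buffer = []
    for t in range(steps):
        # A client grabs the current model and trains asynchronously.
        cid = random.randrange(len(client_data))
        in_flight.append((model.copy(), cid, t + random.randint(1, max_delay)))
        # Deltas arriving now were computed from stale snapshots.
        done = [job for job in in_flight if job[2] <= t]
        in_flight = [job for job in in_flight if job[2] > t]
        for snapshot, c, _ in done:
            buffer.append(local_delta(snapshot, client_data[c]))
        # The server updates only once the buffer is full, then clears it.
        if len(buffer) >= buffer_size:
            model += server_lr * np.mean(buffer, axis=0)
            buffer = []
    return model
```

Setting buffer_size=1 recovers fully asynchronous updates, while a buffer spanning the whole client pool behaves like a synchronous round; the convergence question studied here is how staleness and heterogeneity interact in between, without assuming a bounded gradient norm.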
Related papers
- DASA: Delay-Adaptive Multi-Agent Stochastic Approximation [64.32538247395627]
We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation problem by acting in parallel and communicating with a central server.
To mitigate the effect of delays and stragglers, we propose \texttt{DASA}, a Delay-Adaptive algorithm for multi-agent Stochastic Approximation.
arXiv Detail & Related papers (2024-03-25T22:49:56Z)
- AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms [45.90015262911875]
We analyze asynchronous-type algorithms for distributed SGD in the heterogeneous setting.
As a by-product of our analysis, we also demonstrate guarantees for gradient-type algorithms such as SGD with random reshuffling.
arXiv Detail & Related papers (2023-10-31T13:44:53Z)
- Asynchronous Federated Learning with Bidirectional Quantized Communications and Buffered Aggregation [39.057968279167966]
Asynchronous Federated Learning with Buffered Aggregation (FedBuff) is a state-of-the-art algorithm known for its efficiency and high scalability.
We present a new algorithm (QAFeL) with a quantization scheme that establishes a shared "hidden" state between the server and clients to avoid the error propagation caused by direct quantization (a minimal sketch of this hidden-state idea appears after this list).
arXiv Detail & Related papers (2023-08-01T03:50:58Z)
- Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS$^2$), over mobile edge networks.
We derive an upper bound on the convergence rate of PerFedS$^2$ in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z)
- Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Compared with its baseline schemes, our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5%, respectively.
arXiv Detail & Related papers (2022-04-26T16:37:29Z)
- From Deterioration to Acceleration: A Calibration Approach to Rehabilitating Step Asynchronism in Federated Optimization [13.755421424240048]
We propose a new algorithm, \texttt{FedaGrac}, which calibrates the local direction to a predictive global orientation.
We theoretically prove that \texttt{FedaGrac} achieves a better order of convergence rate than the state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-17T07:26:31Z)
- Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients [3.983055670167878]
Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks.
In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture where multiple edge servers collaborate to incorporate more data from edge devices in training.
arXiv Detail & Related papers (2021-12-09T07:39:31Z)
- Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees [10.984101749941471]
We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms.
Results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates.
arXiv Detail & Related papers (2021-09-09T19:08:56Z)
- Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a globally optimal model under the same delay assumptions (see the delayed-gradient sketch after this list).
arXiv Detail & Related papers (2021-07-06T21:59:49Z)
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
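For the QAFeL entry above, the shared hidden-state idea can be made concrete: sender and receiver keep identical hidden states, only a quantized residual is transmitted, and both sides fold the same residual back in, so quantization error does not compound across rounds. The sketch below is a minimal illustration under an assumed uniform quantizer; QuantizedChannel and quantize are hypothetical names, not the paper's API.

```python
import numpy as np

# Sketch of hidden-state quantized communication in the spirit of QAFeL:
# only the quantized residual Q(x - h) is transmitted, and both endpoints
# fold the same residual into their hidden states, so they never drift apart.
# The uniform quantizer and all names here are illustrative assumptions.

def quantize(v, levels=16):
    """Uniform quantizer over the vector's own max-abs range (illustrative)."""
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * levels) / levels * scale

class QuantizedChannel:
    def __init__(self, dim):
        self.h_sender = np.zeros(dim)    # sender's hidden state
        self.h_receiver = np.zeros(dim)  # receiver's identical copy

    def send(self, x):
        q = quantize(x - self.h_sender)  # transmit only the quantized residual
        self.h_sender += q
        self.h_receiver += q             # receiver applies the same residual
        return self.h_receiver           # receiver's current estimate of x

# Repeated sends shrink the residual instead of compounding quantization error.
ch = QuantizedChannel(dim=3)
x = np.array([0.2, -1.0, 0.7])
for _ in range(5):
    estimate = ch.send(x)
```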
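The DASGD entry above rests on the delayed-gradient update $x_{t+1} = x_t - \eta \nabla f(x_{t-\tau_t})$: the step taken now uses a gradient evaluated at an older iterate. A minimal sketch under an assumed fixed delay and a toy quadratic objective, both illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the delayed-gradient model behind DASGD: the update at
# time t uses a gradient evaluated at an iterate from `delay` steps earlier.
# The fixed delay and toy quadratic objective are illustrative assumptions.

def dasgd(grad, x0, lr=0.05, delay=5, steps=200):
    history = [np.asarray(x0, dtype=float)]
    for t in range(steps):
        stale = history[max(0, t - delay)]  # stale iterate x_{t - delay}
        history.append(history[-1] - lr * grad(stale))
    return history[-1]

# Example: minimize ||x - 3||^2; with a small enough step size the iterates
# still converge despite every gradient being stale.
x_final = dasgd(lambda x: 2.0 * (x - 3.0), x0=np.zeros(2))
```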