Semi-Decentralized Federated Learning with Collaborative Relaying
- URL: http://arxiv.org/abs/2205.10998v1
- Date: Mon, 23 May 2022 02:16:53 GMT
- Title: Semi-Decentralized Federated Learning with Collaborative Relaying
- Authors: Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea J. Goldsmith
- Abstract summary: We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors' local updates to a central parameter server (PS).
We appropriately optimize these averaging weights to ensure that the global update at the PS is unbiased and to reduce the variance of the global update at the PS.
- Score: 27.120495678791883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a semi-decentralized federated learning algorithm wherein clients
collaborate by relaying their neighbors' local updates to a central parameter
server (PS). At every communication round to the PS, each client computes a
local consensus of the updates from its neighboring clients and eventually
transmits a weighted average of its own update and those of its neighbors to
the PS. We appropriately optimize these averaging weights to ensure that the
global update at the PS is unbiased and to reduce the variance of the global
update at the PS, consequently improving the rate of convergence. Numerical
simulations substantiate our theoretical claims and demonstrate settings with
intermittent connectivity between the clients and the PS, where our proposed
algorithm shows an improved convergence rate and accuracy in comparison with
the federated averaging algorithm.
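To make the aggregation rule concrete, here is a minimal NumPy sketch of one communication round. The Bernoulli connectivity probabilities `tau`, the ring-shaped relaying pattern, and the 50/50 coverage split are illustrative assumptions rather than the paper's construction; the paper instead optimizes the averaging weights (`alpha` below) over each client's actual neighborhood so that the global update at the PS stays unbiased while its variance is reduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                           # clients, model dimension
updates = rng.normal(size=(n, d))     # local updates u_1, ..., u_n
tau = np.array([0.9, 0.5, 0.7, 0.3])  # assumed P(client i reaches the PS)

# Relaying weights alpha[i, j]: how much of neighbor j's update client i
# forwards to the PS. Toy ring pattern: client j's update is carried by
# j itself and its left neighbor, scaled by 1/tau so that
#     sum_i tau[i] * alpha[i, j] = 1   for every j   (unbiasedness).
alpha = np.zeros((n, n))
for j in range(n):
    alpha[j, j] += 0.5 / tau[j]
    alpha[(j - 1) % n, j] += 0.5 / tau[(j - 1) % n]
assert np.allclose(tau @ alpha, np.ones(n))

# One round: each client i forms its local consensus sum_j alpha[i, j] * u_j,
# and only the clients whose link to the PS is up deliver it.
connected = rng.random(n) < tau
global_update = alpha[connected].sum(axis=0) @ updates / n
print(global_update)  # equals updates.mean(axis=0) in expectation
```

The assert encodes the unbiasedness constraint; the paper's weight optimization minimizes the variance of the global update subject to that constraint, whereas the ring split above merely satisfies it.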
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated decentralized federated learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z) - Aiding Global Convergence in Federated Learning via Local Perturbation and Mutual Similarity Information [6.767885381740953]
Federated learning has emerged as a distributed optimization paradigm.
We propose a novel modified framework wherein each client locally performs a perturbed gradient step.
We show that our algorithm speeds up convergence by up to 30 global rounds compared with FedAvg.
arXiv Detail & Related papers (2024-10-07T23:14:05Z) - Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation [59.01527054553122]
Decentralised agents can learn equilibria in Mean-Field Games from a single, non-episodic run of the empirical system.
We introduce function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method.
We additionally provide new algorithms that allow agents to estimate the global empirical distribution based on a local neighbourhood.
arXiv Detail & Related papers (2024-08-21T13:32:46Z) - Straggler-Resilient Decentralized Learning via Adaptive Asynchronous Updates [28.813671194939225]
Fully decentralized optimization methods have been advocated as alternatives to the popular parameter server framework.
We propose a fully decentralized algorithm with adaptive asynchronous updates, which adaptively determines the number of neighbor workers each worker communicates with.
We show that DSGD-AAU achieves a linear speedup for convergence and demonstrate its effectiveness via extensive experiments.
arXiv Detail & Related papers (2023-06-11T02:08:59Z) - Faster Last-iterate Convergence of Policy Optimization in Zero-Sum Markov Games [63.60117916422867]
This paper focuses on the most basic setting of competitive multi-agent RL, namely two-player zero-sum Markov games.
We propose a single-loop policy optimization method with symmetric updates from both agents, where the policy is updated via the entropy-regularized optimistic multiplicative weights update (OMWU) method.
Our convergence results improve upon the best known complexities, and lead to a better understanding of policy optimization in competitive Markov games.
arXiv Detail & Related papers (2022-10-03T16:05:43Z) - Over-The-Air Federated Learning under Byzantine Attacks [43.67333971183711]
Federated learning (FL) is a promising solution to enable many AI applications.
FL allows the clients to participate in the training phase, governed by a central server, without sharing their local data.
One of the main challenges of FL is the communication overhead.
We propose a transmission and aggregation framework to reduce the effect of Byzantine attacks by malicious clients.
arXiv Detail & Related papers (2022-05-05T22:09:21Z) - Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying [27.120495678791883]
Intermittent client connectivity is one of the major challenges in centralized federated edge learning frameworks.
We propose a collaborative relaying based semi-decentralized federated edge learning framework.
arXiv Detail & Related papers (2022-02-24T01:06:42Z) - FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning [24.812767482563878]
Federated learning (FL) aims to minimize the communication complexity of training a model over heterogeneous data distributed across many clients.
We propose FedChain, an algorithmic framework that combines the strengths of local methods and global methods to achieve fast convergence in terms of the number of communication rounds R.
arXiv Detail & Related papers (2021-08-16T02:57:06Z) - Faster Non-Convex Federated Learning via Global and Local Momentum [57.52663209739171]
FedGLOMO is the first (first-order) FL algorithm that achieves optimal iteration complexity for smooth non-convex objectives.
Our algorithm is provably optimal even with compressed communication between the clients and the server.
arXiv Detail & Related papers (2020-12-07T21:05:31Z) - Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning [102.26119328920547]
Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients.
We propose a general algorithmic framework, Mime, which mitigates client drift and adapts arbitrary centralized optimization algorithms.
arXiv Detail & Related papers (2020-08-08T21:55:07Z) - Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features desirable properties such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL, without requiring knowledge of the wireless channel state information or the statistical characteristics of the clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
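Complementing the sketch after the abstract, the short Monte Carlo check below (same toy setup and assumptions) illustrates the claimed advantage over federated averaging under intermittent connectivity: the relayed, weight-corrected estimator matches the ideal all-clients average in expectation, while naively averaging only the updates that happen to reach the PS is biased toward well-connected clients.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 4, 3, 20000
updates = rng.normal(size=(n, d))
tau = np.array([0.9, 0.5, 0.7, 0.3])

# Same toy ring-relaying weights as in the sketch above.
alpha = np.zeros((n, n))
for j in range(n):
    alpha[j, j] += 0.5 / tau[j]
    alpha[(j - 1) % n, j] += 0.5 / tau[(j - 1) % n]

target = updates.mean(axis=0)  # the ideal all-clients average
relayed = np.zeros(d)
naive = np.zeros(d)
for _ in range(trials):
    up = rng.random(n) < tau              # which clients reach the PS
    relayed += alpha[up].sum(axis=0) @ updates / n
    if up.any():                          # naive: average whoever arrived
        naive += updates[up].mean(axis=0)
relayed /= trials
naive /= trials

print(np.abs(relayed - target).max())  # close to 0: relaying is unbiased
print(np.abs(naive - target).max())    # clearly > 0: naive averaging is biased
```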
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.