Robust Federated Learning with Connectivity Failures: A
Semi-Decentralized Framework with Collaborative Relaying
- URL: http://arxiv.org/abs/2202.11850v1
- Date: Thu, 24 Feb 2022 01:06:42 GMT
- Title: Robust Federated Learning with Connectivity Failures: A
Semi-Decentralized Framework with Collaborative Relaying
- Authors: Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea
J. Goldsmith
- Abstract summary: Intermittent client connectivity is one of the major challenges in centralized federated edge learning frameworks.
We propose a collaborative relaying based semi-decentralized federated edge learning framework.
- Score: 27.120495678791883
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intermittent client connectivity is one of the major challenges in
centralized federated edge learning frameworks. Intermittently failing uplinks
to the central parameter server (PS) can induce a large generalization gap in
performance, especially when the data distribution among the clients exhibits
heterogeneity. In this work, to mitigate communication blockages between
clients and the central PS, we introduce the concept of knowledge relaying
wherein the successfully participating clients collaborate in relaying their
neighbors' local updates to the central PS in order to boost
the participation of clients with intermittently failing connectivity. We
propose a collaborative relaying based semi-decentralized federated edge
learning framework where at every communication round each client first
computes a local consensus of the updates from its neighboring clients and
eventually transmits a weighted average of its own update and those of its
neighbors to the PS. We appropriately optimize these averaging weights to
reduce the variance of the global update at the PS while ensuring that the
global update is unbiased, consequently improving the convergence rate.
Finally, through experiments on the CIFAR-10 dataset we validate our
theoretical results and demonstrate that our proposed scheme outperforms the
federated averaging benchmark, especially when the data distribution among
clients is non-iid.
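To make the relaying step concrete, the following minimal NumPy sketch (our illustration, not the authors' code) walks through one communication round: each client forms a weighted average of its own update and its neighbors' updates, intermittent uplinks to the PS are modeled with assumed Bernoulli success probabilities, and the relaying weights are chosen so that the aggregate at the PS is unbiased. The names (alpha, p_uplink, relay_round) and the simple weight choice are illustrative assumptions; the paper additionally optimizes these weights to minimize the variance of the global update.

import numpy as np

rng = np.random.default_rng(0)

n_clients, dim = 5, 3
updates = rng.normal(size=(n_clients, dim))        # local updates (placeholder data)
adjacency = np.ones((n_clients, n_clients), bool)  # client-to-client connectivity (fully connected here)
p_uplink = np.full(n_clients, 0.6)                 # assumed Bernoulli uplink-success probabilities

# Relaying weights alpha[i, j]: the weight client i applies to client j's update.
# A simple feasible (not variance-optimal) choice that keeps the PS update unbiased:
# for every j, sum over relayers i of p_uplink[i] * alpha[i, j] equals 1.
alpha = np.zeros((n_clients, n_clients))
for j in range(n_clients):
    relayers = np.flatnonzero(adjacency[:, j])
    alpha[relayers, j] = 1.0 / p_uplink[relayers].sum()

def relay_round(updates, alpha, p_uplink, rng):
    """One communication round of collaborative relaying (illustrative sketch)."""
    # Each client i first computes a local consensus: a weighted average of its
    # own update and those of its neighbors.
    messages = alpha @ updates                     # messages[i] = sum_j alpha[i, j] * updates[j]
    # Intermittent uplinks: only some clients reach the PS this round.
    reached = rng.random(len(p_uplink)) < p_uplink
    # The PS averages whatever it receives; with the unbiased weights above,
    # the expected global update equals the mean of all local updates.
    return messages[reached].sum(axis=0) / len(p_uplink)

global_update = relay_round(updates, alpha, p_uplink, rng)
print(global_update)          # unbiased estimate of the full average
print(updates.mean(axis=0))   # target of the estimator

In this sketch a single round is noisy, but averaging over rounds (or optimizing the weights as the paper does) drives the estimate toward the full-participation average.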
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated Decentralized Federated Learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z) - Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification [49.66272783945571]
Federated Knowledge Graph Embedding learning (FKGE) encounters challenges in communication efficiency stemming from the considerable size of parameters and extensive communication rounds.
We propose FedS, a bidirectional communication-efficient method based on an Entity-Wise Top-K Sparsification strategy.
arXiv Detail & Related papers (2024-06-19T05:26:02Z) - Achieving Linear Speedup in Asynchronous Federated Learning with
Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z) - Timely Asynchronous Hierarchical Federated Learning: Age of Convergence [59.96266198512243]
We consider an asynchronous hierarchical federated learning setting with a client-edge-cloud framework.
The clients exchange the trained parameters with their corresponding edge servers, which update the locally aggregated model.
The goal of each client is to converge to the global model while maintaining the timeliness of the clients.
arXiv Detail & Related papers (2023-06-21T17:39:16Z) - Collaborative Mean Estimation over Intermittently Connected Networks
with Peer-To-Peer Privacy [86.61829236732744]
This work considers the problem of Distributed Mean Estimation (DME) over networks with intermittent connectivity.
The goal is to learn a global statistic over the data samples localized across distributed nodes with the help of a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the additional data sharing among nodes.
arXiv Detail & Related papers (2023-02-28T19:17:03Z) - DisPFL: Towards Communication-Efficient Personalized Federated Learning
via Decentralized Sparse Training [84.81043932706375]
We propose Dis-PFL, a novel personalized federated learning framework built on a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z) - Semi-Decentralized Federated Learning with Collaborative Relaying [27.120495678791883]
We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors' local updates to a central parameter server (PS).
We appropriately optimize these averaging weights to ensure that the global update at the PS is unbiased and to reduce the variance of the global update at the PS.
arXiv Detail & Related papers (2022-05-23T02:16:53Z) - Over-The-Air Federated Learning under Byzantine Attacks [43.67333971183711]
Federated learning (FL) is a promising solution to enable many AI applications.
FL allows the clients to participate in the training phase, governed by a central server, without sharing their local data.
One of the main challenges of FL is the communication overhead.
We propose a transmission and aggregation framework to reduce the effect of Byzantine attacks.
arXiv Detail & Related papers (2022-05-05T22:09:21Z) - Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z) - Decentralized Federated Averaging [17.63112147669365]
Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training with an enormous number of clients.
We study the decentralized FedAvg with momentum (DFedAvgM), which is implemented on clients that are connected by an undirected graph.
arXiv Detail & Related papers (2021-04-23T02:01:30Z)