Decentralized Federated Averaging
- URL: http://arxiv.org/abs/2104.11375v1
- Date: Fri, 23 Apr 2021 02:01:30 GMT
- Title: Decentralized Federated Averaging
- Authors: Tao Sun, Dongsheng Li, Bao Wang
- Abstract summary: Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training with an enormous number of clients.
We study the decentralized FedAvg with momentum (DFedAvgM), which is implemented on clients that are connected by an undirected graph.
- Score: 17.63112147669365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated averaging (FedAvg) is a communication-efficient algorithm for
distributed training with an enormous number of clients. In FedAvg, clients
keep their data locally for privacy protection; a central parameter server is
used to communicate between clients. This central server distributes the
parameters to each client and collects the updated parameters from clients.
FedAvg is mostly studied in a centralized fashion, which requires massive
communication between the server and clients in each communication round. Moreover,
attacking the central server can break the whole system's privacy. In this
paper, we study the decentralized FedAvg with momentum (DFedAvgM), which is
implemented on clients that are connected by an undirected graph. In DFedAvgM,
all clients perform stochastic gradient descent with momentum and communicate
with their neighbors only. To further reduce the communication cost, we also
consider the quantized DFedAvgM. We prove convergence of the (quantized)
DFedAvgM under trivial assumptions; the convergence rate can be improved when
the loss function satisfies the Polyak-Łojasiewicz (PŁ) property. Finally, we numerically verify
the efficacy of DFedAvgM.
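The abstract describes each client running local SGD with momentum and then averaging only with its graph neighbors. Below is a minimal NumPy sketch of one such round under common decentralized-optimization assumptions: W is a symmetric, doubly stochastic mixing matrix supported on the undirected graph, and local_grad(i, w) returns a stochastic gradient of client i's loss at w. The names and exact update order are illustrative, not taken from the paper.
```python
import numpy as np

def dfedavgm_round(x, W, local_grad, lr=0.1, beta=0.9, num_local_steps=5):
    """One communication round of (unquantized) DFedAvgM, as a sketch."""
    n, d = x.shape
    z = np.empty_like(x)
    for i in range(n):                       # each client runs in parallel in practice
        w, mom = x[i].copy(), np.zeros(d)
        for _ in range(num_local_steps):     # local SGD with heavy-ball momentum
            mom = beta * mom + local_grad(i, w)
            w = w - lr * mom
        z[i] = w
    # Mixing step: row i of W is nonzero only for client i and its neighbors
    # in the undirected graph, so each client averages with its neighbors only.
    return W @ z
```
The quantized variant would additionally compress each z[i] before it is exchanged in the mixing step.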
Related papers
- FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification [8.747592727421596]
Federated learning (FL) enables clients to collaboratively train machine learning models under the coordination of a server.
FedAR involves all clients in the global model update, achieving a high-quality global model on the server.
FedAR also demonstrates impressive performance in the presence of a large number of clients with severe client unavailability.
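The title suggests that unavailable clients' contributions are approximated so that every client still enters the global update. A heavily hedged sketch of that idea, assuming the server simply reuses each absent client's most recently observed update (FedAR's actual approximation and rectification steps may differ):
```python
import numpy as np

def aggregate_with_surrogates(updates, last_seen, weights):
    """updates: dict client_id -> update received this round (available clients only);
    last_seen: dict client_id -> most recent update ever received;
    weights: dict client_id -> aggregation weight (e.g., data fraction)."""
    total = None
    for cid, w in weights.items():
        u = updates.get(cid, last_seen.get(cid))   # surrogate for unavailable clients
        if u is None:                              # never heard from this client yet
            continue
        total = w * u if total is None else total + w * u
    return total
```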
arXiv Detail & Related papers (2024-07-26T21:56:52Z)
- Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification [49.66272783945571]
Federated Knowledge Graph Embedding learning (FKGE) encounters challenges in communication efficiency stemming from the considerable size of parameters and extensive communication rounds.
We propose FedS, a bidirectional communication-efficient method based on an Entity-Wise Top-K Sparsification strategy.
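A rough sketch of what an entity-wise Top-K sparsification step could look like: each client ranks its per-entity embedding updates and transmits only the K most-changed rows. The L2-norm ranking criterion below is an assumption for illustration, not necessarily the one used by FedS.
```python
import numpy as np

def topk_entity_payload(old_emb, new_emb, k):
    """Return the ids and deltas of the K entity rows with the largest updates."""
    delta = new_emb - old_emb                    # (num_entities, dim) update matrix
    scores = np.linalg.norm(delta, axis=1)       # one score per entity row
    top_ids = np.argsort(scores)[-k:]            # indices of the K largest scores
    return top_ids, delta[top_ids]               # sparse payload to communicate
```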
arXiv Detail & Related papers (2024-06-19T05:26:02Z)
- Efficient Cross-Domain Federated Learning by MixStyle Approximation [0.3277163122167433]
We introduce a privacy-preserving, resource-efficient Federated Learning concept for client adaptation in hardware-constrained environments.
Our approach includes server model pre-training on source data and subsequent fine-tuning on target data via low-end clients.
Preliminary results indicate that our method reduces computational and transmission costs while maintaining competitive performance on downstream tasks.
arXiv Detail & Related papers (2023-12-12T08:33:34Z)
- A Multi-Token Coordinate Descent Method for Semi-Decentralized Vertical Federated Learning [24.60603310894048]
Communication efficiency is a major challenge in federated learning (FL).
We propose Multi-Token Coordinate Descent (MTCD).
MTCD is a tunable, communication-efficient algorithm for semi-decentralized vertical federated learning setups.
arXiv Detail & Related papers (2023-09-18T17:59:01Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Timely Asynchronous Hierarchical Federated Learning: Age of Convergence [59.96266198512243]
We consider an asynchronous hierarchical federated learning setting with a client-edge-cloud framework.
The clients exchange the trained parameters with their corresponding edge servers, which update the locally aggregated model.
The goal of each client is to converge to the global model, while maintaining timeliness of the clients.
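The client-edge-cloud structure described above amounts to two levels of averaging. A minimal sketch, assuming simple client-count-weighted averaging at both levels; the asynchronous timing and age analysis in the paper are not modeled here.
```python
import numpy as np

def edge_aggregate(client_params):
    """An edge server averages the parameters of its own clients."""
    return np.mean(np.stack(client_params), axis=0)

def cloud_aggregate(edge_models, edge_client_counts):
    """The cloud combines edge models, weighted by clients per edge server."""
    w = np.asarray(edge_client_counts, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack(edge_models), axes=1)
```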
arXiv Detail & Related papers (2023-06-21T17:39:16Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose Dis-PFL, a novel personalized federated learning framework that uses a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
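A hedged illustration of the personalized sparse mask idea: each client keeps its own binary mask over the model weights, and only the retained entries are trained and exchanged with peers. Magnitude-based mask selection is an assumption here; Dis-PFL's actual mask construction may differ.
```python
import numpy as np

def magnitude_mask(weights, sparsity=0.8):
    """Binary mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    k = max(1, int(round((1.0 - sparsity) * weights.size)))
    thresh = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= thresh).astype(weights.dtype)

def sparse_payload(weights, mask):
    """What a client would actually send to its neighbors."""
    return weights * mask
```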
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying [27.120495678791883]
Intermittent client connectivity is one of the major challenges in centralized federated edge learning frameworks.
We propose a collaborative relaying based semi-decentralized federated edge learning framework.
arXiv Detail & Related papers (2022-02-24T01:06:42Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- DeFed: A Principled Decentralized and Privacy-Preserving Federated Learning Algorithm [10.487593244018933]
Federated learning enables a large number of clients to participate in learning a shared model while maintaining the training data stored in each client.
Here we propose a principled decentralized federated learning algorithm (DeFed), which removes the central client in the classical Federated Averaging (FedAvg) setting.
The proposed algorithm is proven to reach the global minimum with a convergence rate of $O(1/T)$ when the loss function is smooth and strongly convex, where $T$ is the number of iterations in gradient descent.
arXiv Detail & Related papers (2021-07-15T07:39:19Z)
- Timely Communication in Federated Learning [65.1253801733098]
We consider a global learning framework in which a parameter server (PS) trains a global model by using $n$ clients without actually storing the client data centrally at a cloud server.
Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.
We find the average age of information experienced by each client and numerically characterize the age-optimal $m$ and $k$ values for a given $n$.
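As described, in each iteration the PS waits for $m$ available clients and sends them the current model; we read the age-optimal $k$ as the PS aggregating only the earliest $k$ of those $m$ local updates. A small simulation sketch of that protocol, with exponential availability and compute times assumed purely for illustration:
```python
import numpy as np

rng = np.random.default_rng(0)

def one_iteration(n, m, k):
    """Return indices of the k clients whose updates the PS aggregates."""
    arrival = rng.exponential(1.0, size=n)     # assumed time until each client is available
    available = np.argsort(arrival)[:m]        # PS waits for the first m available clients
    compute = rng.exponential(1.0, size=m)     # assumed local training times
    return available[np.argsort(compute)[:k]]  # earliest k updates to arrive back
```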
arXiv Detail & Related papers (2020-12-31T18:52:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.