On the Convergence of Decentralized Federated Learning Under Imperfect
Information Sharing
- URL: http://arxiv.org/abs/2303.10695v1
- Date: Sun, 19 Mar 2023 15:58:04 GMT
- Title: On the Convergence of Decentralized Federated Learning Under Imperfect
Information Sharing
- Authors: Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, and
Stanislaw H. Żak
- Abstract summary: This paper presents three different algorithms of Decentralized Federated Learning (DFL) in the presence of imperfect information sharing modeled as noisy communication channels.
Results demonstrate that under imperfect information sharing, the third scheme that mixes gradients is more robust in the presence of a noisy channel.
- Score: 6.48064541861595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized learning and optimization is a central problem in control that
encompasses several existing and emerging applications, such as federated
learning. While there exists a vast literature on this topic, with most methods
centered around the celebrated average-consensus paradigm, less attention has
been devoted to scenarios where the communication between the agents may be
imperfect. To this end, this paper presents three different algorithms of
Decentralized Federated Learning (DFL) in the presence of imperfect information
sharing modeled as noisy communication channels. The first algorithm, Federated
Noisy Decentralized Learning (FedNDL1), comes from the literature; noise is added
to the shared parameters to simulate noisy communication channels. In this
algorithm, the clients exchange parameters over a noisy communication channel and
form a consensus according to a communication graph topology. The proposed second
algorithm (FedNDL2) likewise adds noise to the parameters, but it performs the
gossip averaging before the gradient optimization step. The proposed third
algorithm (FedNDL3), on the other hand, shares the gradients through noisy
communication channels instead of the parameters. Theoretical and experimental
results demonstrate that under imperfect information sharing, the third scheme
that mixes gradients is more robust in the presence of a noisy channel compared
with the algorithms from the literature that mix the parameters.
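To make the differences between the three schemes concrete, the following is a minimal NumPy sketch of a single communication round of each variant, assuming synchronous rounds, a fixed doubly stochastic gossip matrix W, a local stochastic gradient oracle, and additive Gaussian channel noise; the function and variable names (noisy, grad, eta, sigma) are illustrative, and the exact ordering and weighting of the updates in the paper may differ.

```python
import numpy as np

# Minimal sketch of one round of each scheme, assuming n clients with
# parameter vectors stacked in x (shape n-by-d), a doubly stochastic gossip
# matrix W, a local stochastic gradient oracle grad(i, x_i), step size eta,
# and additive Gaussian channel noise of standard deviation sigma.

def noisy(msg, sigma, rng):
    """Model the noisy channel: the receiver sees the message plus noise."""
    return msg + sigma * rng.normal(size=msg.shape)

def fedndl1_step(x, W, grad, eta, sigma, rng):
    # FedNDL1: local gradient step first, then gossip over noisy parameters.
    n = len(x)
    x = np.stack([x[i] - eta * grad(i, x[i]) for i in range(n)])
    return np.stack([sum(W[i, j] * noisy(x[j], sigma, rng) for j in range(n))
                     for i in range(n)])

def fedndl2_step(x, W, grad, eta, sigma, rng):
    # FedNDL2: gossip over noisy parameters first, then local gradient step.
    n = len(x)
    x = np.stack([sum(W[i, j] * noisy(x[j], sigma, rng) for j in range(n))
                  for i in range(n)])
    return np.stack([x[i] - eta * grad(i, x[i]) for i in range(n)])

def fedndl3_step(x, W, grad, eta, sigma, rng):
    # FedNDL3: clients exchange gradients (not parameters) over the noisy
    # channel and update with the gossip-mixed noisy gradients.
    n = len(x)
    g = [grad(i, x[i]) for i in range(n)]
    mixed = [sum(W[i, j] * noisy(g[j], sigma, rng) for j in range(n))
             for i in range(n)]
    return np.stack([x[i] - eta * mixed[i] for i in range(n)])
```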
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated Decentralized Federated Learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z)
- Disentangled Noisy Correspondence Learning [56.06801962154915]
Cross-modal retrieval is crucial in understanding latent correspondences across modalities.
DisNCL is a novel information-theoretic framework for feature Disentanglement in Noisy Correspondence Learning.
arXiv Detail & Related papers (2024-08-10T09:49:55Z)
- FedNMUT -- Federated Noisy Model Update Tracking Convergence Analysis [3.665841843512992]
A novel Decentralized Noisy Model Update Tracking Federated Learning algorithm (FedNMUT) is proposed.
It is tailored to function efficiently in the presence of noisy communication channels.
FedNMUT incorporates noise into its parameters to mimic the conditions of noisy communication channels.
arXiv Detail & Related papers (2024-03-20T02:17:47Z)
- Over-the-air Federated Policy Gradient [3.977656739530722]
Over-the-air aggregation has been widely considered in large-scale distributed learning, optimization, and sensing.
We propose the over-the-air federated policy gradient algorithm, where all agents simultaneously broadcast an analog signal carrying local information to a common wireless channel (a minimal sketch of this analog aggregation appears after this list).
arXiv Detail & Related papers (2023-10-25T12:28:20Z)
- Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized
Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-things, networked sensing, autonomous systems and federated learning call for decentralized algorithms for finite-sum optimizations.
We develop the DEcentralized STochastic REcurSive gradient methodS (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
arXiv Detail & Related papers (2021-10-04T03:17:41Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- A Linearly Convergent Algorithm for Decentralized Optimization: Sending
Less Bits for Free! [72.31332210635524]
Decentralized optimization methods enable on-device training of machine learning models without a central coordinator.
We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators.
We prove that our method can solve the problems without any increase in the number of communications compared to the baseline.
arXiv Detail & Related papers (2020-11-03T13:35:53Z)
- Decentralized Deep Learning using Momentum-Accelerated Consensus [15.333413663982874]
We consider the problem of decentralized deep learning where multiple agents collaborate to learn from a distributed dataset.
We propose and analyze a novel decentralized deep learning algorithm where the agents interact over a fixed communication topology.
Our algorithm is based on the heavy-ball acceleration method used in gradient-based protocols.
arXiv Detail & Related papers (2020-10-21T17:39:52Z)
- Quantized Decentralized Stochastic Learning over Directed Graphs [52.94011236627326]
We consider a decentralized learning problem where data points are distributed among computing nodes communicating over a directed graph.
As the model size gets large, decentralized learning faces a major bottleneck that is the communication load due to each node transmitting messages (model updates) to its neighbors.
We propose the quantized decentralized learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization.
arXiv Detail & Related papers (2020-02-23T18:25:39Z)
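As a complement to the over-the-air aggregation mentioned in the Over-the-air Federated Policy Gradient entry above, here is a minimal sketch of analog over-the-air aggregation, assuming perfect synchronization, unit channel gains, and additive Gaussian channel noise; the function name and the simple averaging rule are illustrative assumptions, not taken from that paper.

```python
import numpy as np

def over_the_air_aggregate(local_updates, noise_std, rng):
    """Sketch of analog over-the-air aggregation: when all agents transmit
    simultaneously, the wireless channel superposes their analog signals, so
    the receiver observes the sum of the updates plus channel noise rather
    than each update individually."""
    superposed = np.sum(local_updates, axis=0)        # waveform superposition
    received = superposed + noise_std * rng.normal(size=superposed.shape)
    return received / len(local_updates)              # noisy estimate of the average update

# Illustrative usage: 10 agents, 5-dimensional local updates.
rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 5))
avg_estimate = over_the_air_aggregate(updates, noise_std=0.1, rng=rng)
```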