Asynchronous Decentralized Learning over Unreliable Wireless Networks
- URL: http://arxiv.org/abs/2202.00955v1
- Date: Wed, 2 Feb 2022 11:00:49 GMT
- Title: Asynchronous Decentralized Learning over Unreliable Wireless Networks
- Authors: Eunjeong Jeong, Matteo Zecchin, Marios Kountouris
- Abstract summary: Decentralized learning enables edge users to collaboratively train models by exchanging information via device-to-device communication.
We propose an asynchronous decentralized stochastic gradient descent (DSGD) algorithm, which is robust to the inherent computation and communication failures occurring at the wireless network edge.
Experimental results corroborate our analysis, demonstrating the benefits of asynchronicity and outdated gradient information reuse in decentralized learning over unreliable wireless networks.
- Score: 4.630093015127539
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Decentralized learning enables edge users to collaboratively train models by
exchanging information via device-to-device communication, yet prior works have
been limited to wireless networks with fixed topologies and reliable workers.
In this work, we propose an asynchronous decentralized stochastic gradient
descent (DSGD) algorithm, which is robust to the inherent computation and
communication failures occurring at the wireless network edge. We theoretically
analyze its performance and establish a non-asymptotic convergence guarantee.
Experimental results corroborate our analysis, demonstrating the benefits of
asynchronicity and outdated gradient information reuse in decentralized
learning over unreliable wireless networks.
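To make the mechanism concrete, below is a minimal sketch of asynchronous DSGD with stale-gradient reuse over failing links. The quadratic losses, failure probabilities, and ring topology are illustrative assumptions for the example, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 8, 5
p_comp_fail, p_link_fail = 0.2, 0.3        # illustrative failure probabilities

x = rng.normal(size=(n_nodes, dim))        # local models
last_grad = np.zeros((n_nodes, dim))       # stale gradients, reused on failure
targets = rng.normal(size=(n_nodes, dim))  # node i minimizes ||x_i - target_i||^2

lr = 0.1
for _ in range(200):
    # Local step: recompute the gradient if the computation succeeds,
    # otherwise reuse the most recent (outdated) gradient.
    for i in range(n_nodes):
        if rng.random() > p_comp_fail:
            last_grad[i] = x[i] - targets[i]
        x[i] -= lr * last_grad[i]
    # Gossip step over an unreliable ring: average only with neighbors
    # whose links are up this round; nobody waits for stragglers.
    x_new = x.copy()
    for i in range(n_nodes):
        nbrs = [j for j in ((i - 1) % n_nodes, (i + 1) % n_nodes)
                if rng.random() > p_link_fail]
        if nbrs:
            x_new[i] = np.mean(np.vstack([x[i], x[nbrs]]), axis=0)
    x = x_new

print("consensus spread:", np.linalg.norm(x - x.mean(axis=0)))
```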
Related papers
- DRACO: Decentralized Asynchronous Federated Learning over Continuous Row-Stochastic Network Matrices [7.389425875982468]
We propose DRACO, a novel method for decentralized asynchronous stochastic gradient descent (SGD) over row-stochastic gossip wireless networks.
Our approach enables edge devices within decentralized networks to perform local training and model exchange along a continuous timeline.
Our numerical experiments corroborate the efficacy of the proposed technique.
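As a rough illustration of the row-stochastic mixing that such gossip schemes build on: each receiver normalizes over whatever it actually heard, so rows of the mixing matrix sum to one while columns generally do not. The topology and dimensions below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Random connectivity snapshot at some instant on the continuous timeline.
A = (rng.random((n, n)) < 0.5) | np.eye(n, dtype=bool)  # keep self-loops

# Row-stochastic weights: each receiver averages over what it heard,
# so rows sum to 1 even though columns need not.
W = A / A.sum(axis=1, keepdims=True)
assert np.allclose(W.sum(axis=1), 1.0)

x = rng.normal(size=(n, 3))   # local models
x = W @ x                     # one asynchronous mixing step
```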
arXiv Detail & Related papers (2024-06-19T13:17:28Z)
- Decentralized Learning over Wireless Networks with Broadcast-Based Subgraph Sampling [36.99249604183772]
This work centers on the communication aspects of decentralized learning over wireless networks, using consensus-based decentralized stochastic gradient descent (D-SGD).
Considering the actual communication cost or delay caused by in-network information exchange in an iterative process, our goal is to achieve fast convergence of the algorithm measured by improvement per transmission slot.
We propose BASS, an efficient communication framework for D-SGD over wireless networks with broadcast transmission and probabilistic subgraph sampling.
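A hedged sketch of the broadcast-plus-subgraph-sampling idea: each slot, a random subset of nodes broadcasts, inducing a random subgraph, and receivers average what they overhear. The ring topology and sampling probability are illustrative stand-ins, not BASS's actual design:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
adj = np.zeros((n, n), dtype=bool)           # illustrative ring topology
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = True

x = rng.normal(size=(n, 4))                  # local models
p_broadcast = 0.5                            # per-node sampling probability

# One transmission slot: the sampled senders induce a random subgraph,
# and each receiver averages its model with the broadcasts it overhears.
senders = rng.random(n) < p_broadcast
x_new = x.copy()
for i in range(n):
    heard = [j for j in range(n) if senders[j] and adj[i, j]]
    if heard:
        x_new[i] = np.mean(np.vstack([x[i], x[heard]]), axis=0)
x = x_new
```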
arXiv Detail & Related papers (2023-10-24T18:15:52Z)
- Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access [56.91063444859008]
We investigate the impact of broadcast transmission and a probabilistic random access policy on the convergence performance of D-SGD.
Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
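As a toy sanity check of this claim, under a simplified collision-channel model for a fully connected cluster (an assumption for illustration, not necessarily the paper's exact setup): with n nodes each transmitting with probability p, the expected number of collision-free broadcasts per slot is f(p) = n p (1 - p)^(n-1), maximized at p = 1/n.

```python
import numpy as np

# A broadcast survives only if no other node transmits in the same slot,
# so the expected number of collision-free broadcasts per slot is
#   f(p) = n * p * (1 - p)**(n - 1),
# which is maximized at p = 1/n.
n = 10
p = np.linspace(0.01, 0.99, 99)
f = n * p * (1 - p) ** (n - 1)
print("best p ~", p[np.argmax(f)], "| theory: 1/n =", 1 / n)
```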
arXiv Detail & Related papers (2023-05-12T10:32:26Z)
- Communication-Efficient Distributionally Robust Decentralized Learning [23.612400109629544]
Decentralized learning algorithms empower interconnected edge devices to share data and computational resources.
We propose a single-loop decentralized gradient descent/ascent algorithm (ADGDA) to solve the underlying minimax optimization problem.
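A minimal single-loop descent/ascent sketch for this kind of minimax problem, using illustrative quadratic losses and exponentiated-gradient ascent on simplex weights; this is a generic stand-in, not the paper's exact ADGDA updates:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dim = 4, 3
W = np.full((n, n), 1.0 / n)          # doubly stochastic gossip matrix
x = rng.normal(size=(n, dim))         # local models
lam = np.full((n, n), 1.0 / n)        # each node's distribution weights
targets = rng.normal(size=(n, dim))   # illustrative per-distribution targets

eta_x, eta_lam = 0.1, 0.5
for _ in range(100):
    for i in range(n):
        # Illustrative quadratic losses; the adversary upweights the worst ones.
        losses = 0.5 * ((x[i] - targets) ** 2).sum(axis=1)
        grad_x = (lam[i][:, None] * (x[i] - targets)).sum(axis=0)
        x[i] -= eta_x * grad_x                 # descent on the model
        lam[i] *= np.exp(eta_lam * losses)     # mirror ascent on the weights
        lam[i] /= lam[i].sum()                 # project back to the simplex
    x = W @ x                                  # consensus on the models
```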
arXiv Detail & Related papers (2022-05-31T09:00:37Z)
- UAV-Aided Decentralized Learning over Mesh Networks [23.612400109629544]
Decentralized learning empowers wireless network devices to collaboratively train a machine learning (ML) model relying solely on device-to-device (D2D) communication.
The local connectivity of real-world mesh networks, due to the limited communication range of their wireless nodes, undermines the efficiency of decentralized learning protocols.
We propose an optimized UAV trajectory, defined as a sequence of waypoints that the UAV visits sequentially in order to transfer intelligence across sparsely connected groups of users.
arXiv Detail & Related papers (2022-03-02T10:39:40Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO), which enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis of how communication delay and randomized chats affect learning further enables the derivation of practical variants.
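A hedged sketch of nonlinear gossiping on a ring, using the classic fractional-power coupling commonly associated with finite-time consensus; the nonlinearity, step size, and topology here are placeholders, not NGO's exact dynamics:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
x = rng.normal(size=n)               # scalar node states on a ring
eps, n_rounds = 0.1, 200
for _ in range(n_rounds):
    x_new = x.copy()
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            d = x[j] - x[i]
            # Fractional-power coupling of the neighbor disagreement,
            # a standard choice for finite-time consensus protocols.
            x_new[i] += eps * np.sign(d) * np.abs(d) ** 0.5
    x = x_new
print("disagreement after gossiping:", x.max() - x.min())
```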
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis [46.76179091774633]
This paper studies federated learning (FL) over wireless device-to-device (D2D) networks.
First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms.
Second, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations.
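A rough sketch of the analog flavor: simultaneous transmissions superpose on the wireless channel, so one channel use delivers a noisy sum of neighbor models (a digital variant would instead quantize and decode each neighbor separately). The full-mesh topology and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, dim = 5, 4
x = rng.normal(size=(n, dim))          # local models, full D2D mesh
noise_std = 0.05                       # illustrative channel noise

# Analog over-the-air step: simultaneous transmissions superpose, so each
# receiver gets the noisy SUM of all neighbor models in one channel use.
x_new = x.copy()
for i in range(n):
    nbrs = [j for j in range(n) if j != i]
    superposed = x[nbrs].sum(axis=0) + noise_std * rng.normal(size=dim)
    x_new[i] = (x[i] + superposed) / n  # noisy network average
x = x_new
```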
arXiv Detail & Related papers (2021-01-29T17:42:26Z)
- Asynchronous Decentralized Learning of a Neural Network [49.15799302636519]
We exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
Asynchronous decentralized SSFN relaxes the communication bottleneck by allowing one-node activation and one-sided communication, which reduces the communication overhead significantly.
Experimental results show that asynchronous dSSFN achieves performance competitive with traditional synchronous dSSFN, especially when the communication network is sparse.
arXiv Detail & Related papers (2020-04-10T15:53:37Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
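A hedged sketch of the compressive-sensing pipeline: a sparse model update is compressed with a random linear projection and recovered at the server with ISTA, one standard CS decoder. All dimensions and the decoder choice are illustrative, not the paper's specific design:

```python
import numpy as np

rng = np.random.default_rng(6)
d, m, k = 200, 60, 5                       # dimension, measurements, sparsity
g = np.zeros(d)
g[rng.choice(d, k, replace=False)] = rng.normal(size=k)  # sparse model update

A = rng.normal(size=(m, d)) / np.sqrt(m)   # random linear compression
y = A @ g                                  # the low-dimensional transmission

# Server-side sparse recovery with ISTA (iterative soft thresholding).
L = np.linalg.norm(A, 2) ** 2              # step size from the Lipschitz constant
lam, est = 0.01, np.zeros(d)
for _ in range(300):
    r = est + (1 / L) * A.T @ (y - A @ est)
    est = np.sign(r) * np.maximum(np.abs(r) - lam / L, 0.0)
print("relative recovery error:", np.linalg.norm(est - g) / np.linalg.norm(g))
```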
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.