Decentralized Learning over Wireless Networks: The Effect of Broadcast
with Random Access
- URL: http://arxiv.org/abs/2305.07368v2
- Date: Fri, 7 Jul 2023 11:32:21 GMT
- Title: Decentralized Learning over Wireless Networks: The Effect of Broadcast
with Random Access
- Authors: Zheng Chen, Martin Dahl, and Erik G. Larsson
- Abstract summary: We investigate the impact of broadcast transmission and probabilistic random access policy on the convergence performance of D-SGD.
Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
- Score: 56.91063444859008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we focus on the communication aspect of decentralized learning,
which involves multiple agents training a shared machine learning model using
decentralized stochastic gradient descent (D-SGD) over distributed data. In
particular, we investigate the impact of broadcast transmission and
probabilistic random access policy on the convergence performance of D-SGD,
considering the broadcast nature of wireless channels and the link dynamics in
the communication topology. Our results demonstrate that optimizing the access
probability to maximize the expected number of successful links is a highly
effective strategy for accelerating the system convergence.
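As a rough illustration of the setting described above, the sketch below simulates one way a probabilistic random-access policy could interact with D-SGD: every node broadcasts with probability p in a slot, a directed link succeeds only if the receiver and its other neighbors stay silent (a slotted-ALOHA-style collision model assumed here, not necessarily the paper's exact model), and the access probability is chosen by Monte Carlo search to maximize the expected number of successful links. The ring topology, collision rule, and row-normalized mixing weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def successful_links(adj, p, rng):
    """One random-access slot: every node broadcasts with probability p
    (slotted-ALOHA-style assumption). The directed link i -> j succeeds if
    i transmits while j and all of j's other neighbors stay silent."""
    n = adj.shape[0]
    tx = rng.random(n) < p
    ok = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if adj[i, j] and tx[i] and not tx[j]:
                if not any(tx[k] for k in range(n) if adj[k, j] and k != i):
                    ok[i, j] = True
    return ok

def dsgd_step(x, grads, ok, lr=0.1):
    """One D-SGD iteration: local gradient step followed by averaging over
    the links that succeeded in this slot (row-normalized weights assumed)."""
    w = ok.astype(float) + np.eye(len(x))
    w /= w.sum(axis=1, keepdims=True)          # row-stochastic mixing matrix
    return w @ (x - lr * grads)

# Monte Carlo search for the access probability that maximizes the expected
# number of successful links on a 6-node ring.
adj = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
probs = np.linspace(0.05, 0.95, 19)
avg_links = [np.mean([successful_links(adj, p, rng).sum() for _ in range(1000)])
             for p in probs]
p_star = probs[int(np.argmax(avg_links))]
print("best access probability:", p_star)

# One D-SGD round using the links that succeed under the optimized policy.
x = rng.normal(size=6)                         # toy scalar model per node
x = dsgd_step(x, grads=np.zeros(6), ok=successful_links(adj, p_star, rng))
```

In this toy collision model the maximizing access probability lands well below 1, reflecting the trade-off the abstract points to between broadcasting more often and causing more collisions.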
Related papers
- DRACO: Decentralized Asynchronous Federated Learning over Continuous Row-Stochastic Network Matrices [7.389425875982468]
We propose DRACO, a novel method for decentralized asynchronous stochastic gradient descent (SGD) over row-stochastic gossip wireless networks.
Our approach enables edge devices within decentralized networks to perform local training and model exchanging along a continuous timeline.
Our numerical experiments corroborate the efficacy of the proposed technique.
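A minimal sketch of the mixing primitive this summary alludes to, with the weights and buffering as illustrative assumptions: each device combines its own model with whichever neighbor models have arrived by its asynchronous update time, normalizing over what was actually received so that the effective mixing matrix stays row-stochastic even when some neighbors are silent.

```python
import numpy as np

def row_stochastic_mix(local_model, received, self_weight=0.5):
    """Combine a device's own model with the neighbor models that arrived
    by its (asynchronous) update time. Normalizing over the received set
    keeps the effective mixing matrix row-stochastic even with missing
    links; the weight choice here is an illustrative assumption."""
    if not received:
        return local_model
    w_nbr = (1.0 - self_weight) / len(received)
    mixed = self_weight * local_model
    for nbr_model in received:
        mixed = mixed + w_nbr * nbr_model
    return mixed

# Example: a device holding model [1, 1] hears from two neighbors.
me = np.array([1.0, 1.0])
inbox = [np.array([3.0, 1.0]), np.array([1.0, 3.0])]
print(row_stochastic_mix(me, inbox))   # -> [1.5 1.5]
```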
arXiv Detail & Related papers (2024-06-19T13:17:28Z) - Distributed Event-Based Learning via ADMM [11.461617927469316]
We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network.
Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents.
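Feature (i) can be illustrated with a generic event-triggering rule, sketched below under the assumption of a simple norm-based threshold test; the paper's actual trigger is tied to its ADMM iterates, so this is only a stand-in for the idea of communicating only when necessary.

```python
import numpy as np

class EventTriggeredAgent:
    """Generic event-triggered communication rule (illustrative; the paper's
    trigger is defined on its ADMM quantities, not this plain norm test)."""

    def __init__(self, x0, threshold):
        self.x = np.asarray(x0, dtype=float)    # current local variable
        self.last_sent = self.x.copy()          # copy the neighbors hold
        self.threshold = threshold

    def maybe_broadcast(self):
        """Broadcast only if the local variable drifted enough since the
        last transmission; otherwise stay silent and save bandwidth."""
        if np.linalg.norm(self.x - self.last_sent) > self.threshold:
            self.last_sent = self.x.copy()
            return self.x.copy()       # message to send
        return None                    # event not triggered -> no message

agent = EventTriggeredAgent(x0=[0.0, 0.0], threshold=0.1)
agent.x += np.array([0.05, 0.0])
print(agent.maybe_broadcast())         # None: change too small
agent.x += np.array([0.2, 0.0])
print(agent.maybe_broadcast())         # broadcasts [0.25, 0.0]
```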
arXiv Detail & Related papers (2024-05-17T08:30:28Z) - Decentralized Learning over Wireless Networks with Broadcast-Based
Subgraph Sampling [36.99249604183772]
This work centers on the communication aspects of decentralized learning over wireless networks, using consensus-based decentralized stochastic gradient descent (D-SGD).
Considering the actual communication cost or delay caused by in-network information exchange in an iterative process, our goal is to achieve fast convergence of the algorithm measured by improvement per transmission slot.
We propose BASS, an efficient communication framework for D-SGD over wireless networks with broadcast transmission and probabilistic subgraph sampling.
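A hedged sketch of the subgraph-sampling idea: in each iteration a subset of nodes is activated to broadcast, the resulting directed links form the sampled subgraph, and a consensus step runs over it. The independent activation probabilities and the W = I - alpha*L weighting below are illustrative choices, not BASS's actual scheduling or weight design.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_broadcast_subgraph(adj, q, rng):
    """Activate node i as a broadcaster with probability q[i] (illustrative
    activation rule); its broadcast reaches all its neighbors, assuming the
    scheduled transmissions are collision-free."""
    active = rng.random(len(q)) < q
    edges = [(i, j) for i in np.flatnonzero(active)
             for j in np.flatnonzero(adj[i])]
    return active, edges

def mixing_matrix(n, edges, alpha=0.2):
    """Consensus weights over the sampled subgraph, W = I - alpha * L built
    from the activated directed links (illustrative weight choice)."""
    w = np.eye(n)
    for i, j in edges:
        w[j, i] += alpha
        w[j, j] -= alpha
    return w

adj = np.ones((4, 4)) - np.eye(4)              # 4 fully connected nodes
active, edges = sample_broadcast_subgraph(adj, q=np.full(4, 0.5), rng=rng)
W = mixing_matrix(4, edges)
print("broadcasters this slot:", np.flatnonzero(active))
print("transmission slots spent:", int(active.sum()))
print("row sums of W (should all be 1):", W.sum(axis=1))
```

Counting the transmission slots spent per iteration, as in the last print, is one way to express the "improvement per transmission slot" objective mentioned above.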
arXiv Detail & Related papers (2023-10-24T18:15:52Z) - Decentralized Channel Management in WLANs with Graph Neural Networks [17.464353263281907]
Wireless local area networks (WLANs) manage multiple access points (APs) and assign radio frequency channels to the APs to satisfy traffic demands.
This paper puts forth a learning-based solution that can be implemented in a decentralized manner.
arXiv Detail & Related papers (2022-10-30T21:14:45Z) - Asynchronous Decentralized Learning over Unreliable Wireless Networks [4.630093015127539]
Decentralized learning enables edge users to collaboratively train models by exchanging information via device-to-device communication.
We propose an asynchronous decentralized stochastic gradient descent (DSGD) algorithm, which is robust to the inherent computation and communication failures occurring at the wireless network edge.
Experimental results corroborate our analysis, demonstrating the benefits of asynchronicity and outdated gradient information reuse in decentralized learning over unreliable wireless networks.
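The reuse of outdated information mentioned above can be sketched as follows, assuming each node caches the last successfully received copy per neighbor and keeps mixing with that stale copy when a transmission fails; the buffer layout and equal mixing weights are assumptions for illustration, not the paper's exact update rule.

```python
import numpy as np

class StaleMixingNode:
    """Keep the last successfully received model per neighbor and mix with
    those (possibly outdated) copies when fresh ones are lost -- a sketch
    of outdated-information reuse, not the paper's exact rule."""

    def __init__(self, model, neighbor_ids):
        self.model = np.asarray(model, dtype=float)
        self.cache = {j: self.model.copy() for j in neighbor_ids}

    def receive(self, nbr_id, nbr_model, success):
        if success:                        # fresh copy arrived
            self.cache[nbr_id] = np.asarray(nbr_model, dtype=float)
        # on failure, silently keep the stale cached copy

    def mix_and_step(self, grad, lr=0.1):
        stack = [self.model] + list(self.cache.values())
        self.model = np.mean(stack, axis=0) - lr * np.asarray(grad)
        return self.model

node = StaleMixingNode(model=[0.0], neighbor_ids=[1, 2])
node.receive(1, [2.0], success=True)       # link to neighbor 1 worked
node.receive(2, [4.0], success=False)      # link failed: reuse stale [0.0]
print(node.mix_and_step(grad=[0.0]))       # -> [0.666...]
```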
arXiv Detail & Related papers (2022-02-02T11:00:49Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Distributed Conditional Generative Adversarial Networks (GANs) for
Data-Driven Millimeter Wave Communications in UAV Networks [116.94802388688653]
A novel framework is proposed to perform data-driven air-to-ground (A2G) channel estimation for millimeter wave (mmWave) communications in an unmanned aerial vehicle (UAV) wireless network.
An effective channel estimation approach is developed, allowing each UAV to train a stand-alone channel model via a conditional generative adversarial network (CGAN) along each beamforming direction.
A cooperative framework, based on a distributed CGAN architecture, is developed, allowing each UAV to collaboratively learn the mmWave channel distribution.
arXiv Detail & Related papers (2021-02-02T20:56:46Z) - FedRec: Federated Learning of Universal Receivers over Fading Channels [92.15358738530037]
We propose a neural network-based symbol detection technique for downlink fading channels.
Multiple users collaborate to jointly learn a universal data-driven detector, hence the name FedRec.
The performance of the resulting receiver is shown to approach the MAP performance in diverse channel conditions without requiring knowledge of the fading statistics.
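A rough sketch of the collaboration pattern described here, with a toy linear detector standing in for FedRec's neural receiver: each user fits the detector on its own fading realizations and a server averages the parameters, FedAvg-style. The model, data, and hyperparameters below are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def local_detector_update(weights, pilots, symbols, lr=0.05, epochs=20):
    """Toy linear symbol detector trained locally by gradient steps on a
    user's own fading realizations (a stand-in for FedRec's neural
    detector; purely illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        pred = pilots @ w
        w -= lr * pilots.T @ (pred - symbols) / len(symbols)
    return w

def federated_round(global_w, user_data):
    """One FedAvg-style round: each user updates locally on its own channel
    observations, then the server averages the resulting detectors."""
    locals_ = [local_detector_update(global_w, X, y) for X, y in user_data]
    return np.mean(locals_, axis=0)

rng = np.random.default_rng(2)
users = []
for _ in range(3):                          # 3 users, distinct fading gains
    h = rng.normal(size=(2,))
    X = rng.normal(size=(50, 2)) * h        # received pilot observations
    y = X @ np.array([1.0, -1.0])           # transmitted symbols (toy)
    users.append((X, y))
w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, users)
print("learned detector weights:", w)
```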
arXiv Detail & Related papers (2020-11-14T11:29:55Z) - Communication-Efficient and Distributed Learning Over Wireless Networks:
Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
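For context, the memoryless baseline that such frameworks refine is the first-order mean-field approximation of SI diffusion, dx/dt = beta * (1 - x) * (A x), with x_i the infection probability of node i; the Mori-Zwanzig construction augments this evolution with memory terms to make it exact. The sketch below integrates only the standard baseline ODE and is not the paper's model.

```python
import numpy as np

def mean_field_si(adj, x0, beta=0.3, dt=0.05, steps=200):
    """First-order mean-field approximation of SI diffusion on a network:
    dx/dt = beta * (1 - x) * (A @ x). This is the memoryless baseline;
    the paper's Mori-Zwanzig derivation adds memory terms to make the
    node-probability evolution exact."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * beta * (1.0 - x) * (adj @ x)
        x = np.clip(x, 0.0, 1.0)
    return x

# 5-node path graph, infection seeded at node 0.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(mean_field_si(A, x0=[1, 0, 0, 0, 0]))
```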
arXiv Detail & Related papers (2020-06-16T18:45:20Z)