Federated Learning over Wireless Device-to-Device Networks: Algorithms
and Convergence Analysis
- URL: http://arxiv.org/abs/2101.12704v1
- Date: Fri, 29 Jan 2021 17:42:26 GMT
- Title: Federated Learning over Wireless Device-to-Device Networks: Algorithms
and Convergence Analysis
- Authors: Hong Xing and Osvaldo Simeone and Suzhi Bi
- Abstract summary: This paper studies federated learning (FL) over wireless device-to-device (D2D) networks.
First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms.
Second, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations.
- Score: 46.76179091774633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of Internet-of-Things (IoT) devices and cloud-computing
applications over siloed data centers is motivating renewed interest in the
collaborative training of a shared model by multiple individual clients via
federated learning (FL). To improve the communication efficiency of FL
implementations in wireless systems, recent works have proposed compression and
dimension reduction mechanisms, along with digital and analog transmission
schemes that account for channel noise, fading, and interference. This prior
art has mainly focused on star topologies consisting of distributed clients and
a central server. In contrast, this paper studies FL over wireless
device-to-device (D2D) networks by providing theoretical insights into the
performance of digital and analog implementations of decentralized stochastic
gradient descent (DSGD). First, we introduce generic digital and analog
wireless implementations of communication-efficient DSGD algorithms, leveraging
random linear coding (RLC) for compression and over-the-air computation
(AirComp) for simultaneous analog transmissions. Next, under the assumptions of
convexity and connectivity, we provide convergence bounds for both
implementations. The results demonstrate the dependence of the optimality gap
on the connectivity and on the signal-to-noise ratio (SNR) levels in the
network. The analysis is corroborated by experiments on an image-classification
task.
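To make the setup concrete, the following is a minimal Python sketch of DSGD over a D2D graph, with a random Gaussian projection standing in for RLC compression. The mixing matrix, dimensions, and the crude linear decoder are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch of DSGD over a D2D graph with RLC-style compression,
# assuming a fixed doubly stochastic mixing matrix W and a random Gaussian
# projection as the compressor. The decoder below (A^T y) is a crude
# unbiased estimate, not the paper's exact RLC encoder/decoder.
import numpy as np

rng = np.random.default_rng(0)

def rlc_compress(x, m):
    """Compress a d-dim vector to m < d dims and crudely reconstruct it.

    A is assumed shared between sender and receiver via a common seed;
    with entries ~ N(0, 1/m), A^T A has identity expectation, so the
    reconstruction is unbiased."""
    A = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return A.T @ (A @ x)

def dsgd_round(thetas, grads, W, lr=0.1, m=32):
    """One round: gossip-average (compressed) models, then local SGD steps.

    For brevity every model, including a device's own, passes through the
    compressor; practical schemes keep the local model exact."""
    recon = np.stack([rlc_compress(th, m) for th in thetas])
    return W @ recon - lr * grads

# Toy usage: 4 devices on a ring, quadratic local losses f_i(x) = ||x - c_i||^2 / 2.
n, d = 4, 64
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0))
thetas = rng.standard_normal((n, d))
targets = rng.standard_normal((n, d))
for _ in range(200):
    thetas = dsgd_round(thetas, thetas - targets, W)
```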
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Digital versus Analog Transmissions for Federated Learning over Wireless Networks [91.20926827568053]
We compare two effective communication schemes for wireless federated learning (FL) over resource-constrained networks.
We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints.
A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks.
arXiv Detail & Related papers (2024-02-15T01:50:46Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
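As a rough illustration of the AirComp mechanism this line of work relies on, the sketch below models simultaneous analog transmissions as a noisy superposition at the receiver, assuming perfect channel inversion so that only additive noise at a given SNR remains; the power normalization is an illustrative choice.

```python
# Hedged sketch of over-the-air (AirComp) aggregation: simultaneous analog
# transmissions superpose on the multiple-access channel, so the receiver
# observes the sum of the local updates plus noise. Perfect channel
# inversion is assumed here, which is a simplification.
import numpy as np

rng = np.random.default_rng(1)

def aircomp_average(updates, snr_db=20.0):
    """Return a noisy estimate of the average of per-device updates.

    updates: (n_devices, d) array of local models/gradients/differences."""
    n, d = updates.shape
    superposed = updates.sum(axis=0)                 # the channel adds signals
    noise_var = np.mean(superposed ** 2) / 10 ** (snr_db / 10)
    noise = rng.normal(0.0, np.sqrt(noise_var), size=d)
    return (superposed + noise) / n                  # aggregation error shrinks with SNR

# Usage: compare the AirComp estimate against the noiseless average.
updates = rng.standard_normal((10, 128))
err = np.linalg.norm(aircomp_average(updates, snr_db=30.0) - updates.mean(axis=0))
```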
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Performance Analysis for Resource Constrained Decentralized Federated Learning Over Wireless Networks [4.76281731053599]
Conventional federated learning can lead to significant communication overhead and reliance on a central server; decentralized federated learning (DFL) avoids these drawbacks.
This study analyzes the performance of resource-constrained DFL using different communication schemes (digital and analog) over wireless networks to optimize communication efficiency.
arXiv Detail & Related papers (2023-08-12T07:56:48Z)
- Asynchronous Decentralized Learning over Unreliable Wireless Networks [4.630093015127539]
Decentralized learning enables edge users to collaboratively train models by exchanging information via device-to-device communication.
We propose an asynchronous decentralized stochastic gradient descent (DSGD) algorithm, which is robust to the computation and communication failures inherent to the wireless network edge.
Experimental results corroborate our analysis, demonstrating the benefits of asynchronicity and outdated gradient information reuse in decentralized learning over unreliable wireless networks.
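The reuse of outdated information under unreliable links can be pictured with a toy cache of last-received neighbor models; the i.i.d. link-erasure model and all constants below are assumptions for illustration, not the paper's algorithm.

```python
# Toy sketch of reusing stale neighbor information: when a transmission from
# neighbor j fails, device i keeps mixing with the last successfully received
# copy of x_j. The i.i.d. link-failure model is assumed for illustration.
import numpy as np

rng = np.random.default_rng(3)
n, d, p_fail = 4, 8, 0.3
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0))
targets = rng.standard_normal((n, d))
x = rng.standard_normal((n, d))
cache = np.tile(x[None, :, :], (n, 1, 1))      # cache[i, j] = last copy of x_j heard by i

for _ in range(400):
    for i in range(n):
        for j in range(n):
            if i == j or rng.random() > p_fail:  # own model is always available
                cache[i, j] = x[j]               # fresh copy arrives over the link
    grads = x - targets                          # toy losses f_i(x) = ||x - c_i||^2 / 2
    x = np.einsum('ij,ijk->ik', W, cache) - 0.05 * grads
```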
arXiv Detail & Related papers (2022-02-02T11:00:49Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Over-the-Air Decentralized Federated Learning [28.593149477080605]
We consider decentralized federated learning (FL) over wireless networks, where over-the-air computation (AirComp) is adopted to facilitate the local model consensus in a device-to-device (D2D) communication manner.
We propose an AirComp-based DSGD with gradient tracking and variance reduction (DSGT-VR) algorithm, where both precoding and decoding strategies are developed for D2D communication.
We prove that the proposed algorithm converges linearly and establish the optimality gap for strongly convex and smooth loss functions, taking into account the channel fading and noise.
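For context on the "GT" part of DSGT-VR, below is textbook decentralized gradient tracking on a toy quadratic problem, omitting the variance reduction, precoding, and channel effects that the paper actually analyzes.

```python
# Textbook decentralized gradient tracking (the "GT" in DSGT-VR) on a toy
# quadratic problem; noise-free links are assumed, unlike the paper's setting.
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 16
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0))
targets = rng.standard_normal((n, d))

def grad(x):                        # gradients of toy losses f_i(x) = ||x - c_i||^2 / 2
    return x - targets

x = rng.standard_normal((n, d))
y = grad(x)                         # tracker initialized to the local gradients
g_old = y.copy()
for _ in range(300):
    x = W @ x - 0.05 * y            # mix with neighbors, step along tracked direction
    g_new = grad(x)
    y = W @ y + g_new - g_old       # tracking: each y_i estimates the average gradient
    g_old = g_new
# every row of x approaches the global minimizer targets.mean(axis=0)
```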
arXiv Detail & Related papers (2021-06-15T09:42:33Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
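To sketch the compressive-sensing uplink idea: a sparse local update is sent as a low-dimensional random projection and reconstructed at the server, here with plain ISTA; the dimensions, sparsity level, and regularization weight are illustrative assumptions rather than the paper's design.

```python
# Hedged sketch of a compressive-sensing FL uplink: transmit a random
# projection of a sparse update, reconstruct at the server with ISTA.
import numpy as np

rng = np.random.default_rng(4)
d, m, k = 256, 64, 8                    # ambient dim, measurements, sparsity
g = np.zeros(d)
g[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)  # sparse local update

A = rng.standard_normal((m, d)) / np.sqrt(m)   # sensing matrix shared via a seed
y = A @ g                                      # low-dimensional uplink transmission

def ista(y, A, lam=0.05, iters=300):
    """Recover a sparse x from y = A x via soft-thresholded gradient steps (ISTA)."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + lr * A.T @ (y - A @ x)                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lr * lam, 0.0)  # soft threshold
    return x

g_hat = ista(y, A)    # approximate reconstruction of the sparse update
```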
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.