Over-the-Air Decentralized Federated Learning
- URL: http://arxiv.org/abs/2106.08011v1
- Date: Tue, 15 Jun 2021 09:42:33 GMT
- Title: Over-the-Air Decentralized Federated Learning
- Authors: Yandong Shi, Yong Zhou, and Yuanming Shi
- Abstract summary: We consider decentralized federated learning (FL) over wireless networks, where over-the-air computation (AirComp) is adopted to facilitate the local model consensus in a device-to-device (D2D) communication manner.
We propose an AirComp-based DSGD with gradient tracking and variance reduction (DSGT-VR) algorithm, where both precoding and decoding strategies are developed for D2D communication.
We prove that the proposed algorithm converges linearly and establish the optimality gap for strongly convex and smooth loss functions, taking into account the channel fading and noise.
- Score: 28.593149477080605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we consider decentralized federated learning (FL) over
wireless networks, where over-the-air computation (AirComp) is adopted to
facilitate the local model consensus in a device-to-device (D2D) communication
manner. However, the AirComp-based consensus phase introduces additive noise at
each algorithm iteration, and the consensus must remain robust to wireless
network topology changes; together these pose a novel, coupled challenge in
establishing convergence for wireless decentralized FL algorithms. To
facilitate the consensus phase, we propose an AirComp-based DSGD with gradient
tracking and
variance reduction (DSGT-VR) algorithm, where both precoding and decoding
strategies are developed for D2D communication. Furthermore, we prove that the
proposed algorithm converges linearly and establish the optimality gap for
strongly convex and smooth loss functions, taking into account the channel
fading and noise. The theoretical result shows that the additional error bound
in the optimality gap depends on the number of devices. Extensive simulations
verify the theoretical results and show that the proposed algorithm outperforms
other benchmark decentralized FL algorithms over wireless networks.
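Since the listing includes no code, here is a minimal NumPy sketch, under stated assumptions, of the kind of update analyzed above: decentralized SGD with gradient tracking, where every consensus exchange passes through a noisy AirComp channel. The quadratic losses, ring topology, unit fading, and all identifiers are illustrative; the paper's variance-reduction step and precoding/decoding designs are omitted.

```python
# Minimal sketch of decentralized gradient tracking over a noisy AirComp
# consensus step. Losses, topology, and channel model are illustrative
# assumptions, not the paper's exact DSGT-VR.
import numpy as np

rng = np.random.default_rng(0)
n, d, lr, sigma = 8, 5, 0.05, 0.01           # devices, dim, step size, noise std

# Local quadratic losses f_i(x) = 0.5 * ||A_i x - b_i||^2
A = np.eye(d) + 0.1 * rng.normal(size=(n, d, d))
b = rng.normal(size=(n, d))
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Symmetric, doubly stochastic mixing matrix for a ring topology
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i + 1) % n], W[i, (i - 1) % n] = 0.5, 0.25, 0.25

def aircomp_mix(Z):
    """Neighbor averaging W @ Z with additive noise modeling AirComp."""
    return W @ Z + sigma * rng.normal(size=Z.shape)

x = rng.normal(size=(n, d))                        # local models
y = np.stack([grad(i, x[i]) for i in range(n)])    # gradient trackers

for t in range(500):
    g_old = np.stack([grad(i, x[i]) for i in range(n)])
    x = aircomp_mix(x) - lr * y                    # noisy consensus + descent
    g_new = np.stack([grad(i, x[i]) for i in range(n)])
    y = aircomp_mix(y) + g_new - g_old             # track the average gradient

print("consensus error:", np.linalg.norm(x - x.mean(0)))
```

The noise injected by aircomp_mix at every iteration is what produces the residual optimality gap the abstract describes; with sigma = 0 the sketch reduces to standard DSGT.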
Related papers
- Digital versus Analog Transmissions for Federated Learning over Wireless
Networks [91.20926827568053]
We compare two effective communication schemes for wireless federated learning (FL) over resource-constrained networks.
We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints.
A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks.
arXiv Detail & Related papers (2024-02-15T01:50:46Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
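As a concrete reference for the aggregation error mentioned above, the following hedged sketch implements one reading of AirFedAvg in which devices transmit model differences that superpose into a single noisy sum; the toy losses and all names are assumptions, not the paper's setup.

```python
# Hedged sketch of one AirFedAvg round: devices transmit local model
# differences, which superpose over the air into a noisy sum at the server.
import numpy as np

rng = np.random.default_rng(1)
n, d, lr, local_steps, sigma = 10, 4, 0.1, 5, 0.01

data = rng.normal(size=(n, d))               # per-device data (toy)
grad = lambda i, w: w - data[i]              # gradient of 0.5*||w - data_i||^2

w_global = np.zeros(d)
for rnd in range(50):
    diffs = []
    for i in range(n):
        w = w_global.copy()
        for _ in range(local_steps):         # local SGD steps
            w -= lr * grad(i, w)
        diffs.append(w - w_global)           # model difference to transmit
    # AirComp: signals add in the air; receiver noise perturbs the sum.
    noisy_sum = np.sum(diffs, axis=0) + sigma * rng.normal(size=d)
    w_global += noisy_sum / n                # scaled aggregation

print("distance to mean of local optima:", np.linalg.norm(w_global - data.mean(0)))
```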
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
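FAFED's exact recursion is not given in this summary; below is a hedged sketch of the generic STORM-style momentum-based variance-reduced estimator that this family of methods builds on, with a toy quadratic loss.

```python
# Sketch of a STORM-style momentum-based variance-reduced estimator,
# the generic technique alluded to above (not FAFED's exact update).
import numpy as np

rng = np.random.default_rng(2)
d, lr, beta, T = 5, 0.05, 0.1, 300

target = rng.normal(size=d)
g = lambda w, xi: (w - target) + xi          # stochastic gradient with noise xi

w = np.zeros(d)
v = g(w, 0.5 * rng.normal(size=d))           # initial gradient estimator
for t in range(T):
    w_new = w - lr * v
    xi = 0.5 * rng.normal(size=d)            # same sample at both iterates
    # Variance-reduced recursion:
    # v_t = g(w_t) + (1 - beta) * (v_{t-1} - g(w_{t-1}))
    v = g(w_new, xi) + (1 - beta) * (v - g(w, xi))
    w = w_new

print("error:", np.linalg.norm(w - target))
```

Evaluating both gradients on the same sample xi is what cancels most of the noise in the correction term; that reuse is the core of the variance-reduction trick.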
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
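The delayed-gradient effect the summary refers to can be illustrated in isolation; the sketch below runs gradient descent with a gradient evaluated tau steps late. The quadratic loss and delay value are illustrative, not the paper's scheme.

```python
# Toy illustration of the delayed-gradient effect: SGD applied with a
# gradient computed tau steps late (as happens with pipelined model splits).
import numpy as np

d, lr, tau, T = 5, 0.05, 3, 200
target = np.ones(d)
grad = lambda w: w - target

ws = [np.zeros(d)]                           # history of iterates
for t in range(T):
    stale = ws[max(0, len(ws) - 1 - tau)]    # parameters from tau steps ago
    ws.append(ws[-1] - lr * grad(stale))

print("error with delay:", np.linalg.norm(ws[-1] - target))
```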
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
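The summary does not specify the aggregation; one plausible, hedged reading is sketched below, where local gradients and Hessians are summed over the air and the server takes a noisy global Newton step. This is an assumption-laden illustration, not the paper's algorithm.

```python
# Hedged sketch of second-order aggregation over the air: local gradients
# and Hessians superpose into noisy sums; the server takes a Newton step.
import numpy as np

rng = np.random.default_rng(3)
n, d, sigma = 6, 4, 0.01

M = rng.normal(size=(n, d, d))
H = M @ M.transpose(0, 2, 1) + np.eye(d)     # local PD Hessians
c = rng.normal(size=(n, d))                  # f_i(w) = 0.5 w'H_i w - c_i'w

w = rng.normal(size=d)
for t in range(10):
    # Over-the-air sums of local gradients and Hessians, with receiver noise
    g_sum = np.einsum('nij,j->i', H, w) - c.sum(0) + sigma * rng.normal(size=d)
    H_sum = H.sum(0) + sigma * rng.normal(size=(d, d))
    w -= np.linalg.solve(H_sum, g_sum)       # approximate global Newton step

g_norm = np.linalg.norm(np.einsum('nij,j->i', H, w) - c.sum(0))
print("global gradient norm:", g_norm)
```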
arXiv Detail & Related papers (2022-03-29T12:39:23Z)
- Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning [28.65635956111857]
Federated learning (FL) enables edge devices to collaboratively train a global model without sharing their private data.
To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to zeroth-order methods.
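For context on what zeroth-order optimization means here, a minimal sketch of the classic two-point gradient estimator follows; the toy loss and constants are illustrative, not the paper's method.

```python
# Sketch of a two-point zeroth-order gradient estimator, the basic
# primitive behind stochastic zeroth-order optimization methods.
import numpy as np

rng = np.random.default_rng(4)
d, mu, lr, T = 5, 1e-4, 0.1, 500

target = rng.normal(size=d)
f = lambda w: 0.5 * np.sum((w - target) ** 2)   # loss queried by value only

w = np.zeros(d)
for t in range(T):
    u = rng.normal(size=d)                   # random probing direction
    # Finite-difference estimate of the directional derivative along u
    g_hat = (f(w + mu * u) - f(w - mu * u)) / (2 * mu) * u
    w -= lr * g_hat

print("error:", np.linalg.norm(w - target))
```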
arXiv Detail & Related papers (2022-01-24T08:56:06Z)
- Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis [46.76179091774633]
This paper studies federated learning (FL) over wireless device-to-device (D2D) networks.
First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms.
Second, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations.
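A toy contrast between the two wireless primitives behind these implementations: digital transmission modeled as quantization before mixing, and analog transmission modeled as a noisy over-the-air sum. The quantizer range and noise level are arbitrary illustrative choices.

```python
# Toy comparison of digital vs. analog consensus primitives in DSGD:
# quantized per-link exchange vs. a noisy over-the-air sum.
import numpy as np

rng = np.random.default_rng(5)
n, d, sigma, bits = 8, 4, 0.01, 8

X = rng.normal(size=(n, d))                  # local models
W = np.zeros((n, n))
for i in range(n):                           # ring mixing matrix
    W[i, i], W[i, (i + 1) % n], W[i, (i - 1) % n] = 0.5, 0.25, 0.25

def quantize(z, bits):
    """Uniform quantizer on [-4, 4], a stand-in for digital transmission."""
    levels = 2 ** bits - 1
    return np.round((np.clip(z, -4, 4) + 4) / 8 * levels) / levels * 8 - 4

X_digital = W @ quantize(X, bits)            # digital: quantize, then mix
X_analog = W @ X + sigma * rng.normal(size=X.shape)  # analog: noisy sum

print("digital consensus err:", np.linalg.norm(X_digital - X_digital.mean(0)))
print("analog  consensus err:", np.linalg.norm(X_analog - X_analog.mean(0)))
```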
arXiv Detail & Related papers (2021-01-29T17:42:26Z)
- Fast Convergence Algorithm for Analog Federated Learning [30.399830943617772]
We propose an AirComp-based FedSplit algorithm for efficient analog federated learning over wireless channels.
We prove that the proposed algorithm converges linearly to the optimal solution under the assumption that the objective function is strongly convex and smooth.
Our algorithm is shown, both theoretically and experimentally, to be more robust to ill-conditioned problems and to converge faster than other benchmark FL algorithms.
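For reference, "converges linearly" has its standard meaning here; results of this type are usually stated in the following generic form, where the contraction factor and noise floor are placeholders rather than the paper's exact constants.

```latex
% Generic linear-convergence bound with a channel-noise floor; rho and C
% are illustrative placeholders, not the paper's exact constants.
\mathbb{E}\big[\|x^{(t)} - x^\star\|^2\big]
  \;\le\; \rho^{\,t}\,\|x^{(0)} - x^\star\|^2 \;+\; C\,\sigma^2,
  \qquad 0 < \rho < 1.
```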
arXiv Detail & Related papers (2020-10-30T10:59:49Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
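To make the compressive-sensing step concrete: a hedged sketch in which a sparse model update is compressed by a random matrix (standing in for the MIMO front end) and recovered at the server by ISTA. Dimensions and the recovery method are illustrative assumptions, not the paper's design.

```python
# Sketch of the compressive-sensing idea: a sparse update is compressed by
# a random matrix and recovered by ISTA (proximal gradient with l1 penalty).
import numpy as np

rng = np.random.default_rng(6)
d, m, k = 200, 60, 5                         # signal dim, measurements, sparsity

update = np.zeros(d)                         # sparse local model update
update[rng.choice(d, k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, d)) / np.sqrt(m)   # random measurement matrix
y = Phi @ update + 0.001 * rng.normal(size=m)  # compressed, noisy observation

# ISTA on 0.5*||y - Phi x||^2 + lam*||x||_1
lam, step = 0.01, 1.0 / np.linalg.norm(Phi, 2) ** 2
x = np.zeros(d)
for _ in range(500):
    z = x + step * Phi.T @ (y - Phi @ x)     # gradient step on the data fit
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative recovery error:", np.linalg.norm(x - update) / np.linalg.norm(update))
```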
arXiv Detail & Related papers (2020-03-18T05:56:27Z)