Semi-Asynchronous Federated Edge Learning Mechanism via Over-the-air
Computation
- URL: http://arxiv.org/abs/2305.04066v3
- Date: Tue, 30 May 2023 03:48:21 GMT
- Title: Semi-Asynchronous Federated Edge Learning Mechanism via Over-the-air
Computation
- Authors: Zhoubin Kou, Yun Ji, Xiaoxiong Zhong, Sheng Zhang
- Abstract summary: We propose a semi-asynchronous aggregation FEEL mechanism with AirComp scheme (PAOTA) to improve the training efficiency of the FEEL system.
Our proposed algorithm achieves convergence performance close to that of the ideal Local SGD.
- Score: 4.598679151181452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-the-air Computation (AirComp) has been demonstrated as an effective
transmission scheme to boost the efficiency of federated edge learning (FEEL).
However, existing FEEL systems with AirComp scheme often employ traditional
synchronous aggregation mechanisms for local model aggregation in each global
round, which suffer from the straggler issue. In this paper, we propose a
semi-asynchronous aggregation FEEL mechanism with AirComp scheme (PAOTA) to
improve the training efficiency of the FEEL system in the case of significant
heterogeneity in data and devices. Taking the staleness and divergence of model
updates from edge devices into consideration, we minimize the convergence upper
bound of the FEEL global model by adjusting the uplink transmit power of edge
devices at each aggregation period. The simulation results demonstrate that our
proposed algorithm achieves convergence performance close to that of the ideal
Local SGD. Furthermore, with the same target accuracy, the training time
required for PAOTA is less than that of the ideal Local SGD and the synchronous
FEEL algorithm via AirComp.
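The abstract's core idea, discounting each device's contribution by the staleness and divergence of its update before the over-the-air summation, can be sketched as follows. The weighting formula, decay constants, and function names here are illustrative assumptions, not the paper's actual power-control optimization.

```python
import numpy as np

def paota_style_aggregate(global_model, local_models, staleness, alpha=0.5, tau=5.0):
    """Hypothetical semi-asynchronous aggregation: each edge device's local
    model is scaled by a factor that decays with its staleness (rounds since
    it was computed) and with its divergence from the current global model,
    mimicking per-device transmit-power adjustment in AirComp.

    local_models: list of np.ndarray local model vectors (one per device)
    staleness:    list of ints, global rounds elapsed since each update
    """
    weights = []
    for m, s in zip(local_models, staleness):
        stale_w = np.exp(-s / tau)                 # older updates count less
        div = np.linalg.norm(m - global_model)
        div_w = 1.0 / (1.0 + alpha * div)          # divergent updates count less
        weights.append(stale_w * div_w)
    weights = np.array(weights)
    weights /= weights.sum()                       # normalize to a convex combination
    # AirComp would deliver this weighted sum as one superimposed analog signal;
    # here it is simulated as an explicit weighted average.
    return sum(w * m for w, m in zip(weights, local_models))
```

In the actual system the weights would be realized by scaling each device's uplink transmit power, so the aggregation happens in the channel rather than at the server.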
Related papers
- Heterogeneity-Aware Cooperative Federated Edge Learning with Adaptive Computation and Communication Compression [7.643645513353701]
Motivated by the drawbacks of cloud-based federated learning (FL), cooperative federated edge learning (CFEL) has been proposed to improve efficiency for FL over mobile edge networks.
CFEL faces critical challenges arising from dynamic and heterogeneous device properties, which slow down the convergence and increase resource consumption.
This paper proposes a heterogeneity-aware CFEL scheme called Heterogeneity-Aware Cooperative Edge-based Federated Averaging (HCEF) that aims to maximize the model accuracy while minimizing the training time and energy consumption.
arXiv Detail & Related papers (2024-09-06T04:26:57Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
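The layer-wise idea in the SALF summary, letting stragglers contribute gradients only for the layers they finished, can be sketched as below. Since backpropagation computes gradients from the last layer backward, a straggler naturally holds gradients for the deepest layers first. The per-layer averaging rule and fixed learning rate are illustrative assumptions.

```python
import numpy as np

def salf_style_aggregate(global_layers, device_partials, lr=0.1):
    """Hypothetical layer-wise aggregation: each device reports gradients only
    for the layers it finished during backprop, and the server averages each
    layer over whichever devices reported it.

    global_layers:   list of np.ndarray, one per layer
    device_partials: list of dicts {layer_index: gradient}, one per device
    """
    new_layers = []
    for li, layer in enumerate(global_layers):
        grads = [p[li] for p in device_partials if li in p]
        if grads:                                   # at least one device reached this layer
            new_layers.append(layer - lr * np.mean(grads, axis=0))
        else:                                       # no device updated it this round
            new_layers.append(layer)
    return new_layers
```

A full participant populates every layer index, while a straggler's dict contains only the last few layers; no round is blocked waiting for slow devices.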
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - AEDFL: Efficient Asynchronous Decentralized Federated Learning with
Heterogeneous Devices [61.66943750584406]
We propose an Asynchronous Efficient Decentralized FL framework, i.e., AEDFL, in heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
arXiv Detail & Related papers (2023-12-18T05:18:17Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on Federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
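AirComp's core mechanism, all devices transmitting analog signals simultaneously so that the multiple-access channel itself computes their sum, can be simulated with a toy model like the one below. The additive-Gaussian channel, noise level, and function names are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def aircomp_sum(local_vectors, noise_std=0.01):
    """Toy AirComp channel: simultaneous analog transmissions superimpose,
    so the receiver observes the element-wise sum plus Gaussian noise."""
    superimposed = np.sum(local_vectors, axis=0)    # the channel adds the signals
    noise = rng.normal(0.0, noise_std, size=superimposed.shape)
    return superimposed + noise

def air_fedavg_round(local_models):
    """One AirFedAvg-style round: receive the over-the-air sum and divide
    by the number of devices to recover the (noisy) average model."""
    return aircomp_sum(local_models) / len(local_models)
```

The noise term is why the aggregation error analyses in the abstracts above matter: the recovered average is only as accurate as the channel and the pre-transmission scaling allow.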
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Vertical Federated Learning over Cloud-RAN: Convergence Analysis and
System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z) - Blind Asynchronous Over-the-Air Federated Edge Learning [15.105440618101147]
Federated Edge Learning (FEEL) is a distributed machine learning technique.
We propose a novel synchronization-free method to recover the parameters of the global model over the air.
Our proposed algorithm comes within $10\%$ of the ideal synchronized scenario, and performs $4\times$ better than the simple case.
arXiv Detail & Related papers (2022-10-31T16:54:14Z) - Resource-Efficient and Delay-Aware Federated Learning Design under Edge
Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z) - Device Scheduling and Update Aggregation Policies for Asynchronous
Federated Learning [72.78668894576515]
Federated Learning (FL) is a newly emerged decentralized machine learning (ML) framework.
We propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems.
arXiv Detail & Related papers (2021-07-23T18:57:08Z) - Edge Federated Learning Via Unit-Modulus Over-The-Air Computation
(Extended Version) [64.76619508293966]
This paper proposes a unit-modulus over-the-air computation (UM-AirComp) framework to facilitate efficient edge federated learning.
It uploads simultaneously local model parameters and updates global model parameters via analog beamforming.
We demonstrate the implementation of UM-AirComp in a vehicle-to-everything autonomous driving simulation platform.
arXiv Detail & Related papers (2021-01-28T15:10:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.