Hierarchical Over-the-Air Federated Edge Learning
- URL: http://arxiv.org/abs/2112.11167v1
- Date: Tue, 21 Dec 2021 13:02:10 GMT
- Title: Hierarchical Over-the-Air Federated Edge Learning
- Authors: Ozan Aygün, Mohammad Kazemi, Deniz Gündüz, Tolga M. Duman
- Abstract summary: We propose hierarchical over-the-air federated learning (HOTAFL) to form clusters near mobile users (MUs).
We demonstrate through theoretical and experimental results that local aggregation in each cluster before global aggregation leads to better performance and faster convergence than OTA FL.
- Score: 26.553214197361626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) over wireless communication channels,
specifically within the over-the-air (OTA) model aggregation framework, is
considered. In OTA wireless
setups, the adverse channel effects can be alleviated by increasing the number
of receive antennas at the parameter server (PS), which performs model
aggregation. However, the performance of OTA FL is limited by the presence of
mobile users (MUs) located far away from the PS. In this paper, to mitigate
this limitation, we propose hierarchical over-the-air federated learning
(HOTAFL), which utilizes intermediary servers (IS) to form clusters near MUs.
We provide a convergence analysis for the proposed setup, and demonstrate
through theoretical and experimental results that local aggregation in each
cluster before global aggregation leads to better performance and faster
convergence than OTA FL.
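As an illustration only, the two-level aggregation the abstract describes can be sketched in a toy NumPy simulation. Everything below is hypothetical (cluster sizes, dimensions, and noise levels are made up, not the paper's system model); it merely shows why averaging first at a nearby intermediary server (IS) over a clean short link, and only then at the distant PS, can suppress the channel noise that a single long-range OTA aggregation would incur:

```python
import numpy as np

rng = np.random.default_rng(0)

def ota_aggregate(updates, noise_std):
    # Over-the-air aggregation: simultaneously transmitted updates superpose
    # on the channel, and the receiver sees their sum plus channel noise.
    superposed = np.sum(updates, axis=0)
    noise = rng.normal(0.0, noise_std, size=superposed.shape)
    return (superposed + noise) / len(updates)

# Toy setup: 3 equal-size clusters of 4 mobile users (MUs), each holding
# a local model update of dimension 8.
dim = 8
clusters = [[rng.normal(size=dim) for _ in range(4)] for _ in range(3)]
all_updates = [u for c in clusters for u in c]
true_avg = np.mean(all_updates, axis=0)

# Flat OTA FL: all 12 MUs transmit directly to the distant PS; the long
# links suffer heavier effective noise (noise_std values are illustrative).
flat = ota_aggregate(all_updates, noise_std=2.0)

# HOTAFL-style: each cluster first aggregates at a nearby IS over a short,
# cleaner link, then the IS outputs are aggregated at the PS.
local = [ota_aggregate(c, noise_std=0.1) for c in clusters]
hier = ota_aggregate(local, noise_std=0.3)

print("flat error:        ", np.linalg.norm(flat - true_avg))
print("hierarchical error:", np.linalg.norm(hier - true_avg))
```

With equal-size clusters the two-stage average equals the global average in expectation, so the only difference is how much channel noise each scheme accumulates.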
Related papers
- UAV-assisted Unbiased Hierarchical Federated Learning: Performance and Convergence Analysis [16.963596661873954]
Hierarchical federated learning (HFL) is a key paradigm to distribute learning across edge devices to reach global intelligence.
In HFL, each edge device trains a local model using its respective data and transmits the updated model parameters to an edge server for local aggregation.
This paper proposes an unbiased HFL algorithm for unmanned aerial vehicle (UAV)-assisted wireless networks.
arXiv Detail & Related papers (2024-07-05T06:23:01Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
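The layer-wise idea above can be sketched in a toy example. This is a hypothetical illustration of the general principle, not the SALF algorithm itself: because backpropagation produces gradients from the output layer backwards, a straggler may have finished only its deepest layers by the deadline, and each layer of the global model can be averaged over whichever devices reported it:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model of 3 layers; each device reports gradients only for the layers
# it finished computing. Backprop runs from the output layer backwards, so
# a straggler has the deepest layers ready first.
layer_dims = [5, 4, 3]

def device_grads(depth_done):
    # Gradients for the deepest `depth_done` layers, None for the rest.
    return [rng.normal(size=d) if i >= len(layer_dims) - depth_done else None
            for i, d in enumerate(layer_dims)]

devices = [device_grads(3), device_grads(3), device_grads(1)]  # one straggler

# Layer-wise aggregation: average each layer over the devices that reported it.
global_update = []
for i, d in enumerate(layer_dims):
    reported = [g[i] for g in devices if g[i] is not None]
    global_update.append(np.mean(reported, axis=0))

print([u.shape for u in global_update])
```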
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without model pruning, while reducing the communication cost by about 50 percent.
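How pruning cuts communication can be illustrated with a minimal sketch. This shows generic magnitude pruning (keep the largest-magnitude weights, zero the rest), a standard technique and not necessarily the paper's specific scheme; the 50% keep ratio is chosen only to mirror the reported cost reduction:

```python
import numpy as np

rng = np.random.default_rng(3)

def prune(weights, keep_ratio=0.5):
    # Magnitude pruning: zero out the smallest-magnitude weights so only
    # a `keep_ratio` fraction of entries needs to be communicated.
    k = int(len(weights) * keep_ratio)
    thresh = np.sort(np.abs(weights))[-k]
    mask = np.abs(weights) >= thresh
    return weights * mask, mask

w = rng.normal(size=100)
pruned, mask = prune(w, keep_ratio=0.5)
print("nonzeros sent:", mask.sum(), "of", w.size)
```

In practice the device would transmit only the surviving values plus their indices (or a shared mask), which is where the communication saving comes from.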
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - Hierarchical Personalized Federated Learning Over Massive Mobile Edge
Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z) - Over-The-Air Clustered Wireless Federated Learning [2.2530496464901106]
Over-the-air (OTA) FL is preferred since the clients can transmit parameter updates simultaneously to a server.
In the absence of a powerful server, a decentralised strategy is employed where clients communicate with their neighbors to obtain a consensus ML model.
We propose the OTA semi-decentralised clustered wireless FL (CWFL) and CWFL-Prox algorithms, which are communication-efficient compared to the decentralised FL strategy.
arXiv Detail & Related papers (2022-11-07T08:34:35Z) - Over-the-Air Federated Edge Learning with Hierarchical Clustering [21.51594138166343]
In over-the-air (OTA) aggregation, mobile users (MUs) aim to reach a consensus on a global model with the help of a parameter server (PS).
In OTA FL, MUs train their models using local data at every training round and transmit their gradients simultaneously using the same frequency band in an uncoded fashion.
While the OTA FL has a significantly decreased communication cost, it is susceptible to adverse channel effects and noise.
We propose a wireless-based hierarchical FL scheme that uses intermediate servers (ISs) to form clusters in the areas where the MUs are more densely located.
arXiv Detail & Related papers (2022-07-19T12:42:12Z) - Over-the-Air Federated Learning with Joint Adaptive Computation and
Power Control [30.7130850260223]
Over-the-air federated learning (OTA-FL) is considered in this paper.
OTA-FL exploits the superposition property of the wireless medium and performs aggregation over the air for free.
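The "aggregation over the air for free" property can be made concrete with a toy sketch. This is an illustrative simulation under simplifying assumptions (perfect channel knowledge at each device, channel-inversion precoding, made-up sizes), not the paper's scheme: all devices transmit at once in the same band, the medium itself adds the signals in a single channel use, and the receiver recovers a noisy average:

```python
import numpy as np

rng = np.random.default_rng(1)

# K devices each transmit a gradient vector simultaneously in the same band.
K, dim = 10, 4
grads = rng.normal(size=(K, dim))
h = rng.uniform(0.5, 1.5, size=K)   # per-device channel gains (assumed known)

# Channel-inversion precoding: each device scales by 1/h_k so the waveforms
# superpose coherently on the channel.
tx = grads / h[:, None]

# The wireless medium sums the K signals in one channel use ("for free"),
# and the receiver observes that sum plus noise.
rx = (h[:, None] * tx).sum(axis=0) + rng.normal(0.0, 0.05, size=dim)

estimate = rx / K                   # noisy average of the K gradients
print(np.linalg.norm(estimate - grads.mean(axis=0)))
```

The key point is that the channel cost does not grow with K: one simultaneous transmission replaces K separate uplink slots.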
arXiv Detail & Related papers (2022-05-12T03:28:03Z) - Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z) - Federated Learning in the Sky: Joint Power Allocation and Scheduling
with UAV Swarms [98.78553146823829]
Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks.
In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm.
arXiv Detail & Related papers (2020-02-19T14:04:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.