Delay-Aware Hierarchical Federated Learning
- URL: http://arxiv.org/abs/2303.12414v4
- Date: Thu, 28 Sep 2023 03:57:58 GMT
- Title: Delay-Aware Hierarchical Federated Learning
- Authors: Frank Po-Chen Lin, Seyyedali Hosseinalipour, Nicolò Michelusi, Christopher Brinton
- Abstract summary: The paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training.
During global synchronization, the cloud server consolidates local models with the outdated global model using a local-global combiner.
Numerical evaluations show DFL's superior performance in terms of faster global model convergence, reduced resource consumption, and robustness against communication delays.
- Score: 7.292078085289465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning has gained popularity as a means of training models
distributed across the wireless edge. The paper introduces delay-aware
hierarchical federated learning (DFL) to improve the efficiency of distributed
machine learning (ML) model training by accounting for communication delays
between edge and cloud. Different from traditional federated learning, DFL
leverages multiple stochastic gradient descent iterations on local datasets
within each global aggregation period and intermittently aggregates model
parameters through edge servers in local subnetworks. During global
synchronization, the cloud server consolidates local models with the outdated
global model using a local-global combiner, thus preserving crucial elements of
both, enhancing learning efficiency in the presence of delay. A set of
conditions is obtained to achieve the sub-linear convergence rate of O(1/k) for
strongly convex and smooth loss functions. Based on these findings, an adaptive
control algorithm is developed for DFL, implementing policies to mitigate
energy consumption and communication latency while aiming for sublinear
convergence. Numerical evaluations show DFL's superior performance in terms of
faster global model convergence, reduced resource consumption, and robustness
against communication delays compared to existing FL algorithms. In summary,
the proposed method offers improved efficiency and performance for both convex
and non-convex loss functions.
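To make the mechanism above concrete, the following minimal sketch in Python walks through one DFL-style round: each device runs a few local SGD steps, each edge server averages the models in its subnetwork, and the cloud blends the fresh edge average with the delayed global model through a local-global combiner. The toy least-squares loss, the fixed combiner weight gamma, and the plain averaging rules are illustrative assumptions, not the paper's exact update rules or control policies.

```python
import numpy as np

def local_sgd(model, data, lr=0.01, steps=5):
    """A few SGD steps on one device's local dataset (toy least-squares loss)."""
    x, y = data
    for _ in range(steps):
        grad = x.T @ (x @ model - y) / len(y)
        model = model - lr * grad
    return model

def edge_aggregate(device_models):
    """Edge server averages the models of the devices in its subnetwork."""
    return np.mean(device_models, axis=0)

def cloud_combine(edge_models, delayed_global, gamma=0.5):
    """Local-global combiner: blend the fresh edge average with the outdated
    global model that arrives after the edge-to-cloud delay (gamma is assumed)."""
    fresh = np.mean(edge_models, axis=0)
    return gamma * fresh + (1.0 - gamma) * delayed_global

# Toy run: 2 edge subnetworks with 3 devices each, synthetic regression data.
rng = np.random.default_rng(0)
dim = 4
global_model = np.zeros(dim)
subnets = [[(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(3)]
           for _ in range(2)]

for _round in range(10):
    delayed_global = global_model.copy()  # stand-in for the delayed global copy
    edge_models = []
    for devices in subnets:
        local_models = [local_sgd(global_model.copy(), d) for d in devices]
        edge_models.append(edge_aggregate(local_models))
    global_model = cloud_combine(edge_models, delayed_global)
```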
Related papers
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion (a minimal sketch of this layer-wise aggregation appears after this list).
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without model pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Time Minimization in Hierarchical Federated Learning [11.678121177730718]
Federated learning is a modern decentralized machine learning technique where user equipments perform machine learning tasks locally and then upload the model parameters to a central server.
In this paper, we consider a 3-layer hierarchical federated learning system which involves model parameter exchanges between the cloud and edge servers.
arXiv Detail & Related papers (2022-10-07T13:53:20Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with Delayed Gradients [21.63719641718363]
Federated learning (FL) is a new machine learning framework which trains a joint model across a large number of decentralized computing devices.
This paper presents a novel FL algorithm, namely Hybrid Federated Learning (HFL), to achieve a balance between learning efficiency and effectiveness.
arXiv Detail & Related papers (2021-02-12T02:27:44Z)
- Federated Learning with Communication Delay in Edge Networks [5.500965885412937]
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks.
This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator.
A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm to incorporate a weighting between the current local model and the delayed global model received at each device during the synchronization step.
arXiv Detail & Related papers (2020-08-21T06:21:35Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution (a generic sketch of the bisection pattern appears after this list).
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
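For the straggler-aware layer-wise entry (SALF) above, a minimal sketch of layer-wise aggregation is shown below, assuming each straggler reports only the layers it finished (marked None otherwise, with backpropagation filling layers from the output side first). The function name, the None convention, and the per-layer averaging are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def layerwise_aggregate(global_layers, device_layers):
    """Average each layer over the devices that delivered it; keep the previous
    global layer when no device reached that depth this round (assumed rule)."""
    aggregated = []
    for idx, current in enumerate(global_layers):
        contribs = [layers[idx] for layers in device_layers if layers[idx] is not None]
        aggregated.append(np.mean(contribs, axis=0) if contribs else current)
    return aggregated

# Toy usage: a 2-layer model and 3 devices; the straggler (last device) only
# delivered its output layer, mimicking backpropagation finishing from the end.
global_layers = [np.zeros((2, 2)), np.zeros(2)]
updates = [
    [np.ones((2, 2)), np.ones(2)],
    [2 * np.ones((2, 2)), 2 * np.ones(2)],
    [None, 3 * np.ones(2)],
]
print(layerwise_aggregate(global_layers, updates))
```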
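For the delay-minimization entry, the summary mentions a bisection search for the optimal solution. The sketch below shows only the generic bisection pattern over a delay budget, assuming a monotone feasibility check; is_feasible and the tolerance are hypothetical stand-ins for the paper's actual resource-allocation subproblem.

```python
def bisect_min_delay(is_feasible, lo, hi, tol=1e-3):
    """Find the smallest delay budget T for which a feasibility check succeeds,
    assuming feasibility is monotone in T (larger budgets stay feasible)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            hi = mid   # budget mid is achievable, try a smaller one
        else:
            lo = mid   # infeasible, a larger budget is needed
    return hi

# Toy usage: the system becomes feasible once the budget exceeds 2.7 time units.
print(bisect_min_delay(lambda t: t >= 2.7, lo=0.0, hi=10.0))  # ~2.7
```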