Mobility Accelerates Learning: Convergence Analysis on Hierarchical
Federated Learning in Vehicular Networks
- URL: http://arxiv.org/abs/2401.09656v1
- Date: Thu, 18 Jan 2024 00:09:54 GMT
- Authors: Tan Chen, Jintao Yan, Yuxuan Sun, Sheng Zhou, Deniz Gündüz,
Zhisheng Niu
- Abstract summary: We show that mobility influences the convergence speed by both fusing the edge data and shuffling the edge models.
Mobility increases the model accuracy of HFL by up to 15.1% when training a convolutional neural network.
- Score: 15.282996586821415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hierarchical federated learning (HFL) enables distributed training of models
across multiple devices with the help of several edge servers and a cloud
server in a privacy-preserving manner. In this paper, we consider HFL with
highly mobile devices, mainly targeting vehicular networks. Through
convergence analysis, we show that mobility influences the convergence speed by
both fusing the edge data and shuffling the edge models. While mobility is
usually considered a challenge from the perspective of communication, we
prove that it increases the convergence speed of HFL with edge-level
heterogeneous data, since more diverse data can be incorporated. Furthermore,
we demonstrate that a higher speed leads to faster convergence, since it
accelerates the fusion of data. Simulation results show that mobility increases
the model accuracy of HFL by up to 15.1% when training a convolutional neural
network on the CIFAR-10 dataset.
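As a rough illustration of this pipeline, the sketch below runs local training on every vehicle, averages models per edge cell, periodically averages across the cloud, and lets vehicles migrate between cells. It is a minimal sketch, not the authors' implementation; the random-migration model, the placeholder local update, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_DEVICES, NUM_EDGES, DIM = 20, 4, 10
CLOUD_PERIOD, ROUNDS = 5, 50
P_MOVE = 0.3  # assumed per-round probability that a vehicle changes edge cell

# Each device holds a local model; `assignment` maps device -> edge server.
models = np.zeros((NUM_DEVICES, DIM))
assignment = rng.integers(0, NUM_EDGES, size=NUM_DEVICES)

def local_update(w, device_id):
    """Placeholder for local SGD on the device's (edge-heterogeneous) data."""
    grad = w - (device_id % NUM_EDGES)  # stand-in gradient; real code trains on local data
    return w - 0.1 * grad

for t in range(1, ROUNDS + 1):
    # 1) Local training on every vehicle.
    for i in range(NUM_DEVICES):
        models[i] = local_update(models[i], i)

    # 2) Edge aggregation: each edge server averages the models of its vehicles.
    for e in range(NUM_EDGES):
        members = np.where(assignment == e)[0]
        if members.size:
            models[members] = models[members].mean(axis=0)

    # 3) Cloud aggregation every CLOUD_PERIOD rounds: average all models globally.
    if t % CLOUD_PERIOD == 0:
        models[:] = models.mean(axis=0)

    # 4) Mobility: vehicles migrate between cells, fusing data across edges and
    #    shuffling edge models -- the two effects the convergence analysis isolates.
    movers = rng.random(NUM_DEVICES) < P_MOVE
    assignment[movers] = rng.integers(0, NUM_EDGES, size=int(movers.sum()))
```

Raising P_MOVE plays the role of higher vehicle speed: models and data mix across cells more often, which is why the analysis predicts faster convergence under edge-level heterogeneous data.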
Related papers
- Efficient Asynchronous Federated Learning with Sparsification and
Quantization [55.6801207905772]
Federated Learning (FL) is attracting growing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally relies on a parameter server and a large number of edge devices throughout model training.
We propose TEASQ-Fed, which lets edge devices participate in the training process asynchronously by actively applying for tasks.
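The title names two update-compression primitives, sparsification and quantization. A minimal sketch of both applied to a model update follows; the top-k rule, the 8-bit uniform scheme, and the function names are assumptions, and the asynchronous task-application protocol itself is not shown.

```python
import numpy as np

def sparsify_top_k(update, k):
    """Top-k sparsification: keep only the k largest-magnitude coordinates."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

def quantize_uniform(update, bits=8):
    """Uniform quantization to 2^bits levels over the update's value range."""
    lo, hi = update.min(), update.max()
    if hi == lo:
        return update.copy()
    levels = 2**bits - 1
    q = np.round((update - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

# A device would compress its update before uploading it asynchronously:
update = np.random.default_rng(1).normal(size=1000)
compressed = quantize_uniform(sparsify_top_k(update, k=100))
```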
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome challenges such as limited device resources and heterogeneous data.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
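A minimal sketch of that split follows: a pruned global part the server aggregates, and a personalized part that never leaves the device. The layer sizes, the magnitude-pruning rule, and the helper names are assumptions.

```python
import numpy as np

class SplitModel:
    """A model split into a shared global part and a device-specific part."""
    def __init__(self, dim_global=64, dim_personal=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w_global = rng.normal(size=dim_global)      # aggregated across devices
        self.w_personal = rng.normal(size=dim_personal)  # fine-tuned locally, never shared

def prune_smallest(w, keep_ratio=0.5):
    """Magnitude pruning: zero out all but the largest-magnitude weights."""
    k = int(len(w) * keep_ratio)
    mask = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]
    mask[idx] = 1.0
    return w * mask

def aggregate_global(devices):
    """Server averages only the (pruned) global parts; personal parts stay local."""
    mean = np.mean([prune_smallest(d.w_global) for d in devices], axis=0)
    for d in devices:
        d.w_global = mean.copy()

devices = [SplitModel(seed=s) for s in range(5)]
aggregate_global(devices)  # devices now share one global part, keep their own personal part
```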
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Data-Heterogeneous Hierarchical Federated Learning with Mobility [20.482704508355905]
Federated learning enables distributed training of machine learning (ML) models across multiple devices.
We consider a data-heterogeneous HFL scenario with mobility, mainly targeting vehicular networks.
We show that mobility can indeed improve the model accuracy by up to 15.1% when training a convolutional neural network.
arXiv Detail & Related papers (2023-06-19T04:22:18Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning while reducing communication cost by about 50 percent.
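The roughly 50 percent figure can be made concrete with a back-of-the-envelope sketch: upload only the largest-magnitude weights plus a bitmask marking which positions survived. The keep ratio and encoding below are assumptions, not the paper's pruning rule.

```python
import numpy as np

def prune_for_upload(weights, keep_ratio=0.5):
    """Keep the largest-magnitude weights; return a bitmask plus the kept values."""
    k = int(len(weights) * keep_ratio)
    idx = np.argpartition(np.abs(weights), -k)[-k:]
    mask = np.zeros(len(weights), dtype=bool)
    mask[idx] = True
    return np.packbits(mask), weights[mask].astype(np.float32)

w = np.random.default_rng(2).normal(size=10_000).astype(np.float32)
mask_bytes, values = prune_for_upload(w)

full = w.nbytes                              # 40,000 bytes for the dense model
pruned = mask_bytes.nbytes + values.nbytes   # ~1,250 + 20,000 bytes
print(f"upload shrinks to {pruned / full:.0%} of the full model")
```

With 32-bit weights and a 1-bit mask, a 0.5 keep ratio lands near 53 percent of the original payload, in the ballpark of the reported 50 percent saving.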
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of limited on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step toward online data selection for FL with limited on-device storage.
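As a stand-in for the paper's selection policy, the sketch below maintains a fixed-size on-device buffer with reservoir sampling, deciding on each arriving sample whether to store it. The uniform eviction rule is an assumed baseline; the paper's actual criterion is presumably more informed.

```python
import random

class SampleBuffer:
    """Fixed-size on-device buffer filled from a data stream (reservoir sampling)."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def offer(self, sample):
        """Decide online whether the new sample displaces a stored one."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample  # evict a uniformly chosen resident sample

buf = SampleBuffer(capacity=100)
for x in range(10_000):   # streaming samples arriving on the device
    buf.offer(x)
# Local FL training would then draw batches from buf.buffer.
```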
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Spatio-Temporal Federated Learning for Massive Wireless Edge Networks [23.389249751372393]
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amounts of data collected by the mobile devices to the edge server.
The proposed spatio-temporal FL (STFL) approach exploits spatial and temporal correlations between learning updates from different mobile devices scheduled to join STFL in various training rounds.
An analytical framework of STFL is proposed and employed to study the learning capability of STFL via its convergence performance.
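One illustrative reading of "temporal correlation between learning updates" is reusing a client's most recent cached update when it is not scheduled in the current round. The sketch below implements that guess; it is an assumption for illustration, not the STFL estimator.

```python
import numpy as np

def aggregate_with_stale_reuse(fresh_updates, cache):
    """Average fresh updates together with cached last-round updates
    from clients absent this round (assumed temporal-correlation proxy)."""
    cache.update(fresh_updates)          # remember the latest update per client
    return np.mean(list(cache.values()), axis=0)

cache = {}                               # client_id -> last received update
round1 = {0: np.ones(4), 1: 2 * np.ones(4)}
round2 = {1: 3 * np.ones(4)}             # client 0 not scheduled this round
g1 = aggregate_with_stale_reuse(round1, cache)
g2 = aggregate_with_stale_reuse(round2, cache)  # still folds in client 0's round-1 update
```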
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
- Accelerating Federated Learning over Reliability-Agnostic Clients in Mobile Edge Computing Systems [15.923599062148135]
Federated learning has emerged as a promising privacy-preserving approach to facilitating AI applications.
It remains a big challenge to optimize the efficiency and effectiveness of FL when it is integrated with a mobile edge computing (MEC) architecture.
In this paper, a multi-layer federated learning protocol called HybridFL is designed for the MEC architecture.
arXiv Detail & Related papers (2020-07-28T17:35:39Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
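"Intelligent sampling of devices in each round" can be sketched as importance-weighted client selection. The norm-based score below is an assumed stand-in, since the summary does not state FOLB's actual criterion.

```python
import numpy as np

def sample_devices(scores, num_select, rng):
    """Pick devices without replacement, with probability proportional to a score."""
    p = scores / scores.sum()
    return rng.choice(len(scores), size=num_select, replace=False, p=p)

rng = np.random.default_rng(3)
update_norms = rng.gamma(shape=2.0, size=50)          # per-device importance scores
chosen = sample_devices(update_norms, num_select=10, rng=rng)
```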