Mobility-Aware Cluster Federated Learning in Hierarchical Wireless
Networks
- URL: http://arxiv.org/abs/2108.09103v1
- Date: Fri, 20 Aug 2021 10:46:58 GMT
- Authors: Chenyuan Feng, Howard H. Yang, Deshun Hu, Zhiwei Zhao, Tony Q. S.
Quek, and Geyong Min
- Abstract summary: We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
- Score: 81.83990083088345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implementing federated learning (FL) algorithms in wireless networks has
garnered a wide range of attention. However, few works have considered the
impact of user mobility on the learning performance. To fill this research gap,
firstly, we develop a theoretical model to characterize the hierarchical
federated learning (HFL) algorithm in wireless networks where the mobile users
may roam across multiple edge access points, leading to incomplete and
inconsistent FL training. Secondly, we provide a convergence analysis of HFL
with user mobility. Our analysis proves that the learning performance of HFL
deteriorates drastically with highly mobile users, and this decline is further
exacerbated by a small number of participants and large divergences among the
users' local data distributions. To circumvent
these issues, we propose a mobility-aware cluster federated learning (MACFL)
algorithm by redesigning the access mechanism, local update rule and model
aggregation scheme. Finally, we provide experiments to evaluate the learning
performance of HFL and our MACFL. The results show that our MACFL can enhance
the learning performance, especially for three different cases, namely, the
case of users with non-independent and identical distribution data, the case of
users with high mobility, and the cases with a small number of users.
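The hierarchical aggregation that the paper analyzes can be viewed as a two-level weighted average: each edge access point averages the models of the users currently associated with it, and the cloud then averages the edge models. A minimal NumPy sketch of this structure follows; all function and variable names are illustrative, not taken from the paper, and the weighting is plain data-size FedAvg rather than the paper's redesigned MACFL scheme.

```python
import numpy as np

def edge_aggregate(user_models, user_weights):
    """Weighted average of user model vectors at one edge access point."""
    w = np.asarray(user_weights, dtype=float)
    w = w / w.sum()  # normalize by the data held by each user at this edge
    return np.average(np.stack(user_models), axis=0, weights=w)

def cloud_aggregate(edge_models, edge_weights):
    """Weighted average of edge models at the cloud server."""
    w = np.asarray(edge_weights, dtype=float)
    w = w / w.sum()  # normalize by the total data seen by each edge
    return np.average(np.stack(edge_models), axis=0, weights=w)

# Toy example: 2 edges, each with 2 users holding a 3-parameter model.
edge1 = edge_aggregate([np.array([1.0, 2.0, 3.0]),
                        np.array([3.0, 2.0, 1.0])], user_weights=[10, 10])
edge2 = edge_aggregate([np.array([0.0, 0.0, 0.0]),
                        np.array([4.0, 4.0, 4.0])], user_weights=[5, 15])
global_model = cloud_aggregate([edge1, edge2], edge_weights=[20, 20])
# global_model → [2.5, 2.5, 2.5]
```

With user mobility, the set of users behind each edge changes between aggregation rounds, so consecutive edge averages are taken over different (possibly inconsistent) sets of local models; this is the effect whose convergence impact the paper quantifies, and which MACFL mitigates by redesigning the access mechanism, local update rule, and aggregation scheme.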
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Data-Heterogeneous Hierarchical Federated Learning with Mobility [20.482704508355905]
Federated learning enables distributed training of machine learning (ML) models across multiple devices.
We consider a data-heterogeneous HFL scenario with mobility, mainly targeting vehicular networks.
We show that mobility can indeed improve the model accuracy by up to 15.1% when training a convolutional neural network.
arXiv Detail & Related papers (2023-06-19T04:22:18Z) - FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA [0.0]
Federated Learning (FL) has emerged as a promising approach for privacy preservation.
This article investigates the performance of FL on an application that might be used to improve a remote healthcare system over ad hoc networks.
We present two metrics to evaluate the network performance: 1) probability of successful transmission while minimizing the interference, and 2) performance of distributed FL model in terms of accuracy and loss.
arXiv Detail & Related papers (2023-03-29T16:36:42Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z) - Federated Learning and Meta Learning: Approaches, Applications, and
Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - On-the-fly Resource-Aware Model Aggregation for Federated Learning in
Heterogeneous Edge [15.932747809197517]
Edge computing has revolutionized the world of mobile and wireless networks thanks to its flexible, secure, and high-performance characteristics.
In this paper, we conduct an in-depth study of strategies to replace a central aggregation server with a flying master.
Our results demonstrate a significant reduction in runtime using our flying master FL framework compared to the original FL, based on measurement results from our EdgeAI testbed and over real 5G networks.
arXiv Detail & Related papers (2021-12-21T19:04:42Z) - Spatio-Temporal Federated Learning for Massive Wireless Edge Networks [23.389249751372393]
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amounts of data collected by the mobile devices to the edge server.
The proposed FL approach exploits spatial and temporal correlations between learning updates from different mobile devices scheduled to join STFL across training rounds.
An analytical framework of STFL is proposed and employed to study the learning capability of STFL via its convergence performance.
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.