Time-sensitive Learning for Heterogeneous Federated Edge Intelligence
- URL: http://arxiv.org/abs/2301.10977v1
- Date: Thu, 26 Jan 2023 08:13:22 GMT
- Title: Time-sensitive Learning for Heterogeneous Federated Edge Intelligence
- Authors: Yong Xiao, Xiaohan Zhang, Guangming Shi, Marwan Krunz, Diep N. Nguyen,
Dinh Thai Hoang
- Abstract summary: We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
- Score: 52.83633954857744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time machine learning has recently attracted significant interest due to
its potential to support instantaneous learning, adaptation, and decision
making in a wide range of application domains, including self-driving vehicles,
intelligent transportation, and industrial automation. We investigate real-time
ML in a federated edge intelligence (FEI) system, an edge computing system that
implements federated learning (FL) solutions based on data samples collected
and uploaded from decentralized data networks. FEI systems often exhibit
heterogeneous communication and computational resource distribution, as well as
non-i.i.d. data samples, resulting in long model training time and inefficient
resource utilization. Motivated by this fact, we propose a time-sensitive
federated learning (TS-FL) framework to minimize the overall run-time for
collaboratively training a shared ML model. Training acceleration solutions for
both TS-FL with synchronous coordination (TS-FL-SC) and asynchronous
coordination (TS-FL-ASC) are investigated. To address the straggler effect in
TS-FL-SC, we develop an analytical solution to characterize the impact of
selecting different subsets of edge servers on the overall model training time.
A server-dropping solution is proposed that allows slow edge servers to be
removed from model training when their impact on the resulting model accuracy
is limited. A joint optimization algorithm is proposed to minimize the overall
model training time by jointly selecting the participating edge servers and the
number of local epochs. We develop an analytical expression to characterize the
impact of the staleness effect of asynchronous coordination and the straggler
effect of FL on the time consumption of TS-FL-ASC. Experimental results show
that TS-FL-SC and TS-FL-ASC can reduce the overall model training time by up to
63% and 28%, respectively.
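The synchronous variant's round time is governed by the slowest participating
server, which is what makes server dropping effective. The Python sketch below
illustrates that idea under simple assumptions: the linear per-epoch time model,
the accuracy_impact scores, and all names are hypothetical stand-ins, not the
paper's analytical solution.

```python
# A minimal sketch of the server-dropping idea in TS-FL-SC: synchronous FL
# waits for the slowest participant each round, so slow edge servers whose
# estimated contribution to accuracy is small can be excluded.
# Hypothetical throughout: the time model and the accuracy_impact scores.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    epoch_time: float        # seconds per local training epoch
    comm_time: float         # seconds to upload a model update
    accuracy_impact: float   # estimated accuracy lost if this server is dropped

def round_time(servers: list[EdgeServer], local_epochs: int) -> float:
    """Synchronous round time: everyone waits for the slowest participant."""
    return max(s.epoch_time * local_epochs + s.comm_time for s in servers)

def drop_stragglers(servers: list[EdgeServer], local_epochs: int,
                    max_accuracy_loss: float) -> list[EdgeServer]:
    """Greedily drop the slowest servers while the cumulative estimated
    accuracy loss stays within the given tolerance."""
    kept = sorted(servers, key=lambda s: s.epoch_time * local_epochs + s.comm_time)
    budget = max_accuracy_loss
    while len(kept) > 1 and kept[-1].accuracy_impact <= budget:
        budget -= kept[-1].accuracy_impact
        kept.pop()  # remove the current slowest server
    return kept

servers = [
    EdgeServer("edge-A", 0.8, 0.2, 0.010),
    EdgeServer("edge-B", 1.1, 0.3, 0.008),
    EdgeServer("edge-C", 4.0, 0.5, 0.004),  # straggler with little accuracy impact
]
kept = drop_stragglers(servers, local_epochs=5, max_accuracy_loss=0.005)
print([s.name for s in kept])             # ['edge-A', 'edge-B']
print(round_time(kept, local_epochs=5))   # 5.8 instead of 20.5
```

On these toy numbers, dropping edge-C cuts the round time from 20.5 s to 5.8 s
at an estimated 0.4% accuracy cost, which is the trade-off that the paper's
server-selection analysis quantifies.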
Related papers
- AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks [15.195798715517315]
Split federated learning (SFL) is a promising solution that offloads the primary training workload to a server via model partitioning.
We propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems.
arXiv Detail & Related papers (2024-03-19T19:05:24Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is derived by recasting the training-latency optimization as a graph edge selection problem (a pairing sketch appears after this list).
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- SemiSFL: Split Federated Learning on Unlabeled and Non-IID Data [34.49090830845118]
Federated Learning (FL) has emerged to allow multiple clients to collaboratively train machine learning models on their private data at the network edge.
We propose a novel Semi-supervised SFL system, termed SemiSFL, which incorporates clustering regularization to perform SFL with unlabeled and non-IID client data.
Our system provides a 3.8x speed-up in training time, reduces the communication cost by about 70.3% while reaching the target accuracy, and achieves up to 5.8% improvement in accuracy under non-IID scenarios.
arXiv Detail & Related papers (2023-07-29T02:35:37Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting (a weighting sketch appears after this list).
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Spatio-Temporal Federated Learning for Massive Wireless Edge Networks [23.389249751372393]
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amounts of data collected by the mobile devices to the edge server.
The proposed FL approach exploits spatial and temporal correlations between learning updates from the different mobile devices scheduled to join STFL across training rounds.
An analytical framework is proposed and employed to study the learning capability of STFL via its convergence performance.
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
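The client-pairing idea referenced in the "Pairing and Split Learning" entry
above can be made concrete with a short sketch. The workload-splitting latency
model, the greedy fastest-with-slowest rule, and all names below are
illustrative assumptions; the cited paper instead formulates the problem as
graph edge selection.

```python
# A minimal sketch of latency-aware client pairing for split learning.
# Toy assumption: a pair splits one round's workload in proportion to the
# two clients' speeds, so the pair finishes in work / (s_a + s_b).

def pair_latency(s_a: float, s_b: float, work: float = 1.0) -> float:
    """Completion time of a pair that shares `work` according to speed."""
    return work / (s_a + s_b)

def greedy_pairing(speeds: list[float]) -> list[tuple[int, int]]:
    """Pair the fastest remaining client with the slowest remaining one,
    balancing combined speed (and hence latency) across pairs."""
    order = sorted(range(len(speeds)), key=lambda i: speeds[i])
    return [(order[i], order[-1 - i]) for i in range(len(order) // 2)]

speeds = [1.0, 7.0, 2.5, 4.0]      # relative compute speeds of four clients
pairs = greedy_pairing(speeds)     # [(0, 1), (2, 3)]: slowest with fastest
bottleneck = max(pair_latency(speeds[a], speeds[b]) for a, b in pairs)
print(pairs, round(bottleneck, 3))  # 0.154, vs. 0.286 for pairing (0, 2), (3, 1)
```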
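Similarly, the "age-aware" weighting referenced in the asynchronous-FL entry
above can be sketched as discounting each client update by how many aggregation
rounds old its base model is. The polynomial decay, the normalization, and all
names are hypothetical choices, not the cited paper's exact design.

```python
# A minimal sketch of staleness-aware ("age-aware") aggregation for
# asynchronous FL. Hypothetical throughout: the (1 + staleness)^(-alpha)
# decay and the convex-combination normalization.
import numpy as np

def age_weight(staleness: int, alpha: float = 0.5) -> float:
    """Down-weight an update whose base model is `staleness` rounds old;
    a fresh update (staleness 0) keeps full weight."""
    return (1.0 + staleness) ** (-alpha)

def aggregate(global_model: np.ndarray,
              updates: list[tuple[np.ndarray, int]],
              lr: float = 1.0) -> np.ndarray:
    """Apply a staleness-weighted average of (delta, staleness) updates."""
    weights = np.array([age_weight(s) for _, s in updates])
    weights /= weights.sum()          # normalize to a convex combination
    step = sum(w * delta for w, (delta, _) in zip(weights, updates))
    return global_model + lr * step

model = np.zeros(4)
updates = [(np.ones(4), 0),        # fresh update: raw weight 1.0
           (2 * np.ones(4), 3)]    # stale update: raw weight (1 + 3)^-0.5 = 0.5
print(aggregate(model, updates))   # ~[1.333 ...]: the fresh update dominates
```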
This list is automatically generated from the titles and abstracts of the papers on this site.