Efficient Parallel Split Learning over Resource-constrained Wireless
Edge Networks
- URL: http://arxiv.org/abs/2303.15991v4
- Date: Wed, 24 Jan 2024 06:03:32 GMT
- Title: Efficient Parallel Split Learning over Resource-constrained Wireless
Edge Networks
- Authors: Zheng Lin, Guangyu Zhu, Yiqin Deng, Xianhao Chen, Yue Gao, Kaibin
Huang, Yuguang Fang
- Abstract summary: In this paper, we advocate the integration of edge computing paradigm and parallel split learning (PSL)
We propose an innovative PSL framework, namely, efficient parallel split learning (EPSL) to accelerate model training.
We show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy.
- Score: 44.37047471448793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasingly deeper neural networks hinder the democratization of
privacy-enhancing distributed learning, such as federated learning (FL), to
resource-constrained devices. To overcome this challenge, in this paper, we
advocate the integration of edge computing paradigm and parallel split learning
(PSL), allowing multiple client devices to offload substantial training
workloads to an edge server via layer-wise model split. By observing that
existing PSL schemes incur excessive training latency and large volume of data
transmissions, we propose an innovative PSL framework, namely, efficient
parallel split learning (EPSL), to accelerate model training. To be specific,
EPSL parallelizes client-side model training and reduces the dimension of local
gradients for back propagation (BP) via last-layer gradient aggregation,
leading to a significant reduction in server-side training and communication
latency. Moreover, by considering the heterogeneous channel conditions and
computing capabilities at client devices, we jointly optimize subchannel
allocation, power control, and cut layer selection to minimize the per-round
latency. Simulation results show that the proposed EPSL framework significantly
decreases the training latency needed to achieve a target accuracy compared
with state-of-the-art benchmarks, and that the tailored resource management and
layer split strategy considerably reduces latency compared with the counterpart
without optimization.
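
To make the mechanism described above concrete, the following is a minimal PyTorch sketch of parallel split learning with an aggregation step on the cut-layer gradients. It is an illustration only, not the authors' implementation: the toy model sizes, the simple mean-aggregation rule, and all identifiers are assumptions chosen to show where the communication and server-side savings would come from.

```python
# Illustrative sketch (not the paper's code): parallel split learning where
# clients hold the layers before the cut and the server holds the rest.
# The "aggregation" step below averages per-sample cut-layer gradients to
# mimic the reduced gradient volume EPSL aims for; the paper's exact
# last-layer aggregation rule may differ.
import torch
import torch.nn as nn

NUM_CLIENTS, IN_DIM, CUT_DIM, NUM_CLASSES, BATCH = 4, 20, 32, 10, 8

# Client-side sub-models (before the cut layer), trained in parallel on devices.
clients = [nn.Sequential(nn.Linear(IN_DIM, CUT_DIM), nn.ReLU()) for _ in range(NUM_CLIENTS)]
# Server-side sub-model (after the cut layer), shared by all clients.
server = nn.Sequential(nn.Linear(CUT_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_CLASSES))
criterion = nn.CrossEntropyLoss()

# 1) Each client computes "smashed data" (cut-layer activations) on its local batch.
smashed, labels = [], []
for net in clients:
    x = torch.randn(BATCH, IN_DIM)                    # synthetic local data
    y = torch.randint(0, NUM_CLASSES, (BATCH,))
    a = net(x)
    a.retain_grad()                                   # keep cut-layer gradient for client-side BP
    smashed.append(a)
    labels.append(y)

# 2) The server runs one forward/backward pass over all clients' activations at once.
loss = criterion(server(torch.cat(smashed)), torch.cat(labels))
loss.backward()

# 3) Instead of returning one gradient per sample, aggregate (average) the
#    cut-layer gradients, shrinking what must be sent back for client-side BP.
for i, a in enumerate(smashed):
    g_full = a.grad                                   # shape: (BATCH, CUT_DIM)
    g_aggregated = g_full.mean(dim=0, keepdim=True)   # shape: (1, CUT_DIM)
    print(f"client {i}: {tuple(g_full.shape)} -> {tuple(g_aggregated.shape)}")
```

In this toy setup each client receives a single aggregated gradient vector instead of one gradient per sample, which is the kind of reduction in back-propagation traffic the abstract refers to.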
Related papers
- Split Federated Learning Over Heterogeneous Edge Devices: Algorithm and Optimization [7.013344179232109]
Split Learning (SL) is a promising collaborative machine learning approach, enabling resource-constrained devices to train models without sharing raw data.
Current SL algorithms face limitations in training efficiency and suffer from prolonged latency.
We propose the Heterogeneous Split Federated Learning framework, which allows resource-constrained clients to train their personalized client-side models in parallel.
arXiv Detail & Related papers (2024-11-21T07:46:01Z)
- AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks [15.195798715517315]
Split federated learning (SFL) is a promising solution that offloads the primary training workload to a server via model partitioning.
We propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems.
arXiv Detail & Related papers (2024-03-19T19:05:24Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed that recasts the training-latency optimization as a graph edge selection problem (a toy sketch of this pairing idea follows this entry).
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
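
The pairing idea above can be read as selecting edges in a graph whose vertices are clients. Below is a toy greedy sketch under assumed quantities (per-client compute speeds and a placeholder pair-latency cost, neither taken from the paper) that shows the edge-selection flavor of such an algorithm.

```python
# Toy sketch of greedy client pairing as graph edge selection (illustrative only;
# the pair-latency model below is a placeholder, not the paper's formulation).
from itertools import combinations

# Assumed per-client compute speeds (samples/sec); higher is faster.
speeds = {"c0": 1.0, "c1": 4.0, "c2": 2.5, "c3": 0.8, "c4": 3.0, "c5": 1.5}
WORKLOAD = 100.0  # samples per client per round (assumed)

def pair_latency(a, b):
    # Placeholder cost: if a slow and a fast client pool their work evenly,
    # the pair finishes when their combined rate has processed both workloads.
    return 2 * WORKLOAD / (speeds[a] + speeds[b])

# Build all candidate edges with their costs, then greedily pick the cheapest
# edge whose endpoints are still unmatched (a classic greedy matching).
edges = sorted(combinations(speeds, 2), key=lambda e: pair_latency(*e))
matched, pairs = set(), []
for a, b in edges:
    if a not in matched and b not in matched:
        pairs.append((a, b, round(pair_latency(a, b), 2)))
        matched.update((a, b))

print(pairs)  # selected client pairs with their estimated per-round latency
```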
- Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning [16.8717239856441]
We propose a model-splitting allowed FL (SFL) framework to alleviate the shortage of computing power faced by clients when training deep neural networks (DNNs) using federated learning (FL).
Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session.
To solve this mixed integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut layer and other parameters of an AI model, and thus transform the training latency minimization problem (TLMP) into a continuous one (a toy fit in this spirit is sketched after this entry).
arXiv Detail & Related papers (2023-07-21T12:26:42Z)
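
The regression step above can be pictured as follows: profile a few per-cut-layer quantities, fit smooth curves to them, and then treat the cut-layer index as a continuous variable in the latency objective. The numbers, the quadratic form, and the latency model below are assumptions for illustration, not the paper's fitted relationships.

```python
# Illustrative regression over cut-layer index (assumed data, not from the paper):
# fit a smooth polynomial so a discrete cut-layer choice can be relaxed into a
# continuous decision variable when minimizing training latency.
import numpy as np

# Hypothetical profile of a small model: for each candidate cut layer,
# client-side compute (GFLOPs) and cut-layer activation size (MB).
cut_layers   = np.array([1, 2, 3, 4, 5, 6], dtype=float)
client_flops = np.array([0.2, 0.6, 1.1, 1.9, 2.8, 4.0])   # grows with deeper cuts
act_size_mb  = np.array([6.0, 4.5, 3.2, 2.1, 1.4, 0.9])   # shrinks with deeper cuts

# Quadratic fits give differentiable surrogates f(c) and g(c) of both quantities.
f = np.poly1d(np.polyfit(cut_layers, client_flops, deg=2))
g = np.poly1d(np.polyfit(cut_layers, act_size_mb, deg=2))

# With the surrogates, a per-round latency model (placeholder constants) becomes
# a continuous function of the cut layer c and can be searched or optimized directly.
CLIENT_GFLOPS_PER_S, UPLINK_MBPS = 0.5, 10.0
latency = lambda c: f(c) / CLIENT_GFLOPS_PER_S + 8.0 * g(c) / UPLINK_MBPS

c_grid = np.linspace(1, 6, 501)
best_c = c_grid[np.argmin(latency(c_grid))]
print(f"continuous optimum ~{best_c:.2f}; round to nearest layer: {round(best_c)}")
```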
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent (a toy magnitude-pruning sketch follows this entry).
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
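
As a rough illustration of why pruning lowers communication cost, the sketch below applies global magnitude pruning to a toy model and reports how many nonzero parameters would remain to be uploaded. The 50 percent ratio mirrors the figure quoted above, but the toy model and the one-shot pruning rule are assumptions, not the paper's scheme.

```python
# Toy global magnitude pruning (illustrative; not the paper's scheme): zero out
# the smallest-magnitude weights so fewer nonzero parameters need to be uploaded.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
PRUNE_RATIO = 0.5  # assumed target, matching the ~50% communication saving quoted above

# Find a global magnitude threshold across all weight matrices.
all_weights = torch.cat([m.weight.detach().abs().flatten()
                         for m in model if isinstance(m, nn.Linear)])
threshold = torch.quantile(all_weights, PRUNE_RATIO)

# Apply the mask in place and count surviving parameters.
total, kept = 0, 0
with torch.no_grad():
    for m in model:
        if isinstance(m, nn.Linear):
            mask = m.weight.abs() > threshold
            m.weight.mul_(mask.float())
            total += mask.numel()
            kept += int(mask.sum())

print(f"kept {kept}/{total} weights ({kept / total:.0%}); "
      f"a sparse upload would carry roughly half the payload")
```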
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed (a toy stale-gradient update is sketched after this entry).
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
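
The delayed-gradient effect mentioned above can be seen on a toy quadratic: applying a gradient that is one round stale still converges for small step sizes, but the staleness tightens the stable step-size range. The setup below is generic and is not the paper's analysis.

```python
# Toy illustration of a one-step delayed gradient update (illustrative only;
# the paper analyzes this effect theoretically for its split-learning scheme).
import numpy as np

# Minimize f(w) = 0.5 * w^T A w, whose gradient is A @ w.
A = np.diag([1.0, 5.0])                 # assumed toy curvature
w_fresh = w_stale = np.array([3.0, -2.0])
g_prev = A @ w_stale                    # gradient left over from the previous round
LR, STEPS = 0.05, 200

for _ in range(STEPS):
    # Standard update: gradient computed at the current iterate.
    w_fresh = w_fresh - LR * (A @ w_fresh)
    # Delayed update: apply last round's gradient, then refresh it for the next
    # round, mimicking the lag introduced by overlapping client/server computation.
    w_stale, g_prev = w_stale - LR * g_prev, A @ w_stale

print("fresh   ||w|| =", np.linalg.norm(w_fresh))
print("delayed ||w|| =", np.linalg.norm(w_stale))  # still converges; dynamics differ
```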
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in wireless networks.
We consider deep neural network (DNN) models, which can be trained using PARTEL by introducing auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)