Semi-Synchronous Personalized Federated Learning over Mobile Edge
Networks
- URL: http://arxiv.org/abs/2209.13115v1
- Date: Tue, 27 Sep 2022 02:12:43 GMT
- Title: Semi-Synchronous Personalized Federated Learning over Mobile Edge
Networks
- Authors: Chaoqun You, Daquan Feng, Kun Guo, Howard H. Yang, Tony Q. S. Quek
- Abstract summary: We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS$^2$), over mobile edge networks.
We derive an upper bound on the convergence rate of PerFedS$^2$ in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS$^2$ in saving training time as well as guaranteeing the convergence of training loss.
- Score: 88.50555581186799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized Federated Learning (PFL) is a new Federated Learning (FL)
approach to address the heterogeneity issue of the datasets generated by
distributed user equipments (UEs). However, most existing PFL implementations
rely on synchronous training to ensure good convergence performance, which may
lead to a serious straggler problem, where the training time is heavily
prolonged by the slowest UE. To address this issue, we propose a
semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized
Federated Averaging (PerFedS$^2$), over mobile edge networks. By jointly
optimizing the wireless bandwidth allocation and UE scheduling policy, it not
only mitigates the straggler problem but also provides convergent training loss
guarantees. We derive an upper bound on the convergence rate of PerFedS$^2$ in
terms of the number of participants per global round and the number of rounds.
On this basis, the bandwidth allocation problem can be solved using analytical
solutions and the UE scheduling policy can be obtained by a greedy algorithm.
Experimental results verify the effectiveness of PerFedS$^2$ in saving training
time as well as guaranteeing the convergence of training loss, in contrast to
synchronous and asynchronous PFL algorithms.
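The abstract combines three ingredients: a semi-synchronous aggregation rule (the server aggregates a fixed number of UE updates per global round while stragglers keep training on stale models), a Per-FedAvg-style personalization objective, and an optimized bandwidth/scheduling policy. Below is a minimal sketch of the semi-synchronous update loop on a toy linear-regression task. The first-order Per-FedAvg step, the first-K-to-finish scheduling stand-in, and all constants (`K`, `ALPHA`, `BETA`, the straggler timing model) are illustrative assumptions, not the authors' implementation; in particular, the paper derives its scheduling policy from a greedy algorithm over a convergence bound, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, NUM_UES = 10, 20
K = 8                      # updates aggregated per global round (semi-synchronous)
ROUNDS = 50
ALPHA, BETA = 0.05, 0.01   # assumed inner (personalization) / outer step sizes

# Toy heterogeneous data: each UE regresses against its own ground truth.
tasks = []
for _ in range(NUM_UES):
    X = rng.normal(size=(32, DIM))
    w_true = rng.normal(size=DIM)
    tasks.append((X, X @ w_true + 0.1 * rng.normal(size=32)))
speeds = rng.uniform(0.5, 3.0, NUM_UES)  # assumed per-round compute+upload time

def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)

def perfedavg_update(w, X, y):
    """Per-FedAvg-style step (first-order MAML approximation):
    adapt once locally, then step along the gradient at the adapted point."""
    w_adapted = w - ALPHA * grad(w, X, y)
    return w - BETA * grad(w_adapted, X, y)

w_global = np.zeros(DIM)
# (finish_time, ue_index, model snapshot the UE trains on) for every UE.
in_flight = [(speeds[i], i, w_global.copy()) for i in range(NUM_UES)]

for r in range(ROUNDS):
    # Semi-synchronous rule: wait only for the K earliest-finishing UEs.
    # Stragglers keep computing; their stale updates count in later rounds.
    in_flight.sort(key=lambda entry: entry[0])
    ready, in_flight = in_flight[:K], in_flight[K:]
    updates = [perfedavg_update(w, *tasks[i]) for _, i, w in ready]
    w_global = np.mean(updates, axis=0)
    for finish, i, _ in ready:  # finished UEs download w_global and restart
        in_flight.append((finish + speeds[i], i, w_global.copy()))

loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in tasks])
print(f"average loss of the shared model after {ROUNDS} rounds: {loss:.3f}")
```

The stand-in scheduler above simply takes the first K arrivals; in the paper, this choice, together with the wireless bandwidth allocation, is exactly what the analytical solution and the greedy algorithm optimize.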
Related papers
- AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks [15.195798715517315]
Split federated learning (SFL) is a promising solution by offloading the primary training workload to a server via model partitioning.
We propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems.
arXiv Detail & Related papers (2024-03-19T19:05:24Z) - Client Orchestration and Cost-Efficient Joint Optimization for
NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks in terms of HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, in which we provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Training Latency Minimization for Model-Splitting Allowed Federated Edge
Learning [16.8717239856441]
We propose a model-splitting allowed FL (SFL) framework to alleviate the shortage of computing power faced by clients in training deep neural networks (DNNs) using federated learning (FL).
Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session.
To solve this mixed integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut-layer and other parameters of an AI model, and thus transform the TLMP into a continuous problem.
arXiv Detail & Related papers (2023-07-21T12:26:42Z) - Hierarchical Personalized Federated Learning Over Massive Mobile Edge
Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z) - Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated
Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z) - Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5% over the baseline schemes considered.
arXiv Detail & Related papers (2022-04-26T16:37:29Z) - Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with
Delayed Gradients [21.63719641718363]
Federated learning (FL) is a new machine learning framework that trains a joint model across a large number of decentralized computing devices.
This paper presents a novel FL algorithm, namely Hybrid Federated Learning (HFL), to achieve a learning balance in efficiency and effectiveness.
arXiv Detail & Related papers (2021-02-12T02:27:44Z)