Hierarchical Personalized Federated Learning Over Massive Mobile Edge
Computing Networks
- URL: http://arxiv.org/abs/2303.10580v1
- Date: Sun, 19 Mar 2023 06:00:05 GMT
- Title: Hierarchical Personalized Federated Learning Over Massive Mobile Edge
Computing Networks
- Authors: Chaoqun You, Kun Guo, Howard H. Yang, Tony Q. S. Quek
- Abstract summary: We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
- Score: 95.39148209543175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized Federated Learning (PFL) is a new Federated Learning (FL)
paradigm, particularly tackling the heterogeneity issues brought by various
mobile user equipments (UEs) in mobile edge computing (MEC) networks. However,
due to the ever-increasing number of UEs and the complicated administrative
work it brings, it is desirable to switch the PFL algorithm from its
conventional two-layer framework to a multiple-layer one. In this paper, we
propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive
MEC networks. The UEs in HPFL are divided into multiple clusters, and the UEs
in each cluster forward their local updates to the edge server (ES)
synchronously for edge model aggregation, while the ESs forward their edge
models to the cloud server semi-asynchronously for global model aggregation.
The above training manner leads to a tradeoff between the training loss in each
round and the round latency. HPFL combines the objectives of training loss
minimization and round latency minimization while jointly determining the
optimal bandwidth allocation as well as the ES scheduling policy in the
hierarchical learning framework. Extensive experiments verify that HPFL not
only guarantees convergence in hierarchical aggregation frameworks but also has
advantages in round training loss minimization and round latency minimization.
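The training manner described above can be summarized in a short sketch. The following is a minimal illustration, not the authors' implementation: models are numpy vectors, each edge server synchronously averages its cluster's UE updates, and the cloud semi-asynchronously folds in whichever edge models have arrived, with an assumed staleness discount `beta` standing in for the paper's actual ES scheduling policy.

```python
# Minimal HPFL-style aggregation sketch (illustrative assumptions throughout).
import numpy as np

def edge_aggregate(ue_updates, ue_weights):
    """Synchronous aggregation inside one cluster: weighted average of UE models."""
    w = np.asarray(ue_weights, dtype=float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, ue_updates))

def cloud_aggregate(global_model, edge_reports, beta=0.5):
    """Semi-asynchronous global step: merge the ES models that arrived this
    round, discounting stale reports (staleness in global rounds; assumption)."""
    model = global_model.copy()
    for edge_model, staleness in edge_reports:
        alpha = beta ** staleness          # older reports get smaller weight
        model = (1 - alpha) * model + alpha * edge_model
    return model

# Toy run: 2 clusters of 3 UEs each; one ES report arrives one round late.
dim = 4
rng = np.random.default_rng(0)
global_model = np.zeros(dim)
cluster_models = [
    edge_aggregate([rng.normal(size=dim) for _ in range(3)], [1, 2, 1])
    for _ in range(2)
]
global_model = cloud_aggregate(global_model, [(cluster_models[0], 0),
                                              (cluster_models[1], 1)])
print(global_model)
```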
Related papers
- Robust Model Aggregation for Heterogeneous Federated Learning: Analysis and Optimizations [35.58487905412915]
We propose a time-driven SFL (T-SFL) framework for heterogeneous systems.
To evaluate the learning performance of T-SFL, we provide an upper bound on the global loss function.
We develop a discriminative model selection algorithm that removes local models from clients whose number of iterations falls below a predetermined threshold.
arXiv Detail & Related papers (2024-05-11T11:55:26Z)
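A minimal sketch of the selection step as described, under assumptions: `min_iters`, size-weighted averaging, and all names are illustrative rather than the paper's notation.

```python
# T-SFL-style discriminative model selection: drop local models from clients
# below the iteration threshold, then average the rest by dataset size.
import numpy as np

def select_and_aggregate(client_models, client_iters, client_sizes, min_iters):
    kept = [i for i, t in enumerate(client_iters) if t >= min_iters]
    if not kept:
        raise ValueError("no client met the iteration threshold")
    w = np.array([client_sizes[i] for i in kept], dtype=float)
    w /= w.sum()
    return sum(wi * client_models[i] for wi, i in zip(w, kept))

models = [np.ones(3) * k for k in range(4)]
print(select_and_aggregate(models, client_iters=[10, 2, 8, 1],
                           client_sizes=[100, 50, 80, 60], min_iters=5))
```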
- AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks [15.195798715517315]
Split federated learning (SFL) is a promising solution that offloads the primary training workload to a server via model partitioning.
We propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems.
arXiv Detail & Related papers (2024-03-19T19:05:24Z)
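For readers unfamiliar with split learning, a toy sketch of the partitioning idea follows; the cut-layer choice is what AdaptSFL adapts to available resources, and everything here (layer count, `cut`, ReLU layers) is an illustrative assumption.

```python
# Split-learning sketch: the client computes activations up to the cut layer
# and offloads the remaining layers (the primary workload) to the server.
import numpy as np

rng = np.random.default_rng(1)
layers = [rng.normal(size=(8, 8)) for _ in range(6)]   # toy 6-layer linear net
cut = 2                                                # cut-layer index (tunable)

def client_forward(x, layers, cut):
    for W in layers[:cut]:                             # client-side layers
        x = np.maximum(W @ x, 0.0)                     # ReLU
    return x                                           # smashed data sent to server

def server_forward(h, layers, cut):
    for W in layers[cut:]:                             # server-side layers
        h = np.maximum(W @ h, 0.0)
    return h

x = rng.normal(size=8)
out = server_forward(client_forward(x, layers, cut), layers, cut)
print(out.shape)
```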
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that the proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
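As a rough illustration of what pruning buys in communication, here is a generic magnitude-pruning sketch; the paper's actual pruning ratio is optimized jointly with wireless resource allocation, and this stand-in makes no such optimization.

```python
# Magnitude pruning: zero out the smallest-magnitude weights so only
# `keep_ratio` of them need to be communicated.
import numpy as np

def magnitude_prune(weights, keep_ratio=0.5):
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]             # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

w = np.random.default_rng(2).normal(size=(4, 4))
w_pruned = magnitude_prune(w, keep_ratio=0.5)          # ~50% fewer nonzeros to upload
print((w_pruned != 0).mean())
```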
- Delay-Aware Hierarchical Federated Learning [7.292078085289465]
The paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training.
During global synchronization, the cloud server consolidates local models with an outdated global model using a convex control algorithm.
Numerical evaluations show DFL's superior performance in terms of faster global model convergence, reduced resource consumption, and robustness against communication delays.
arXiv Detail & Related papers (2023-03-22T09:23:29Z)
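The consolidation step lends itself to a one-line sketch: a convex combination of the fresh local average and the outdated global model. The fixed weight `gamma` below is an illustrative assumption; the paper selects it via a convex control algorithm.

```python
# DFL-style global step: consolidate fresh local models with the outdated
# global model the cloud currently holds.
import numpy as np

def dfl_global_step(outdated_global, local_models, gamma=0.7):
    local_avg = np.mean(local_models, axis=0)
    return gamma * local_avg + (1.0 - gamma) * outdated_global  # convex combination

g = np.zeros(3)
print(dfl_global_step(g, [np.ones(3), 3 * np.ones(3)]))         # -> [1.4 1.4 1.4]
```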
- Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS$^2$), over mobile edge networks.
We derive an upper bound on the convergence rate of PerFedS$^2$ in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either show unsatisfactory generalization across different model architectures or incur enormous extra computation and memory costs.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
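A minimal sketch of the sparse-mask idea: each client holds a binary mask and works with only the masked entries of the shared model. The random fixed masks below are an assumption; FedSpa learns and personalizes the masks rather than drawing them at random.

```python
# Personalized sparse masks: each client's local model is the shared model
# restricted to that client's binary mask.
import numpy as np

rng = np.random.default_rng(3)
shared = rng.normal(size=10)                        # dense model at the server
masks = [rng.random(10) < 0.3 for _ in range(4)]    # one ~30%-dense mask per client

def personalize(shared, mask):
    return np.where(mask, shared, 0.0)              # client's sparse local model

local_models = [personalize(shared, m) for m in masks]
print([int(m.sum()) for m in masks])                # active parameters per client
```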