Adaptive Federated Pruning in Hierarchical Wireless Networks
- URL: http://arxiv.org/abs/2305.09042v1
- Date: Mon, 15 May 2023 22:04:49 GMT
- Title: Adaptive Federated Pruning in Hierarchical Wireless Networks
- Authors: Xiaonan Liu and Shiqiang Wang and Yansha Deng and Arumugam Nallanathan
- Abstract summary: Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without model pruning while reducing communication cost by about 50 percent.
- Score: 69.6417645730093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a promising privacy-preserving distributed
learning framework where a server aggregates models updated by multiple devices
without accessing their private datasets. Hierarchical FL (HFL), as a
device-edge-cloud aggregation hierarchy, can enjoy both the cloud server's
access to more datasets and the edge servers' efficient communications with
devices. However, the learning latency increases with the HFL network scale due
to the increasing number of edge servers and devices with limited local
computation capability and communication bandwidth. To address this issue, in
this paper, we introduce model pruning for HFL in wireless networks to reduce
the neural network scale. We present a convergence analysis of an upper bound
on the l2 norm of gradients for HFL with model pruning, analyze the computation
and communication latency of the proposed model pruning scheme, and formulate
an optimization problem to maximize the convergence rate under a given latency
threshold by jointly optimizing the pruning ratio and wireless resource
allocation. By decoupling the optimization problem and using Karush-Kuhn-Tucker
(KKT) conditions, closed-form solutions for the pruning ratio and wireless resource
allocation are derived. Simulation results show that our proposed HFL with
model pruning achieves learning accuracy similar to that of HFL without model
pruning while reducing communication cost by about 50 percent.
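The abstract does not say which pruning criterion the scheme uses; unstructured magnitude pruning is a common choice in the FL-pruning literature. Below is a minimal NumPy sketch of pruning one layer at a given pruning ratio; the function name and the magnitude criterion are assumptions for illustration, not the paper's method.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, pruning_ratio: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    pruning_ratio in [0, 1) is the fraction of parameters removed.
    A ratio near 0.5 roughly halves what a device must compute on and
    transmit, which is the mechanism behind the ~50 percent
    communication-cost reduction the abstract reports.
    """
    flat = np.abs(weights).ravel()
    k = int(pruning_ratio * flat.size)
    if k == 0:
        return weights
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold  # weights tied at the threshold are pruned too
    return weights * mask

# Toy usage: prune half the weights of a random layer.
rng = np.random.default_rng(0)
layer = rng.normal(size=(128, 64))
pruned = prune_by_magnitude(layer, pruning_ratio=0.5)
print(f"nonzero fraction: {np.count_nonzero(pruned) / pruned.size:.2f}")
```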
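The abstract does not reproduce the bound itself. For orientation, non-convex FL analyses of this kind typically bound the average squared gradient norm by terms of the following generic shape; the constants and the pruning-error term below are illustrative assumptions, not the paper's exact result.

```latex
% Illustrative generic form only; c_0, c_1, c_2, and the
% pruning-induced error term \delta^2 are assumptions.
\[
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\!\left[ \|\nabla F(\mathbf{w}_t)\|_2^2 \right]
\le \frac{c_0 \left( F(\mathbf{w}_0) - F^{\ast} \right)}{\eta T}
  + c_1 \eta \sigma^2
  + c_2 \delta^2,
\]
where $\eta$ is the learning rate, $\sigma^2$ bounds the stochastic-gradient
variance, and $\delta^2$ grows with the pruning ratio.
```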
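The paper's closed-form KKT solutions are not given in the abstract. As a toy illustration of how the latency threshold couples to the pruning ratio, the sketch below assumes per-round computation and communication latencies scale linearly with the fraction of weights kept, and solves for the smallest ratio that meets the budget; the linear latency model and all constants are assumptions, not the paper's derivation.

```python
def min_pruning_ratio_for_latency(t_comp_full: float,
                                  t_comm_full: float,
                                  latency_budget: float) -> float:
    """Smallest pruning ratio rho meeting the latency budget, under a
    toy model where per-round latency is (1 - rho) * (t_comp + t_comm).
    Smaller rho keeps more of the model and favors the convergence rate,
    so the latency constraint binds exactly, as in a KKT-style argument.
    """
    total_full = t_comp_full + t_comm_full
    if total_full <= latency_budget:
        return 0.0  # budget already met without pruning
    rho = 1.0 - latency_budget / total_full
    return min(rho, 0.99)  # cap: never prune the entire model

# Example: 0.8 s compute + 1.2 s upload per round, 1.0 s budget -> rho = 0.5.
print(min_pruning_ratio_for_latency(0.8, 1.2, 1.0))
```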
Related papers
- Joint Model Pruning and Resource Allocation for Wireless Time-triggered Federated Learning [31.628735588144096]
Time-triggered federated learning organizes users into tiers based on fixed time intervals.
We apply model pruning to wireless Time-triggered systems and jointly study the problem of optimizing the pruning ratio and bandwidth allocation.
Our proposed TT-Prune demonstrates a 40% reduction in communication cost compared with asynchronous multi-tier FL without model pruning.
arXiv Detail & Related papers (2024-08-03T12:19:23Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, with model pruning, shared with all devices to learn data representations, and a personalized part fine-tuned for a specific device; a minimal sketch of this split appears after the list.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Hierarchical Federated Learning in Wireless Networks: Pruning Tackles Bandwidth Scarcity and System Heterogeneity [32.321021292376315]
We propose pruning-enabled hierarchical federated learning (PHFL) in heterogeneous networks (HetNets).
We first derive an upper bound of the convergence rate that clearly demonstrates the impact of the model pruning and wireless communications.
We validate the effectiveness of our proposed PHFL algorithm in terms of test accuracy, wall clock time, energy consumption and bandwidth requirement.
arXiv Detail & Related papers (2023-08-03T07:03:33Z)
- Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks [44.37047471448793]
In this paper, we advocate integrating the edge computing paradigm with parallel split learning (PSL).
We propose an innovative PSL framework, namely, efficient parallel split learning (EPSL) to accelerate model training.
We show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy.
arXiv Detail & Related papers (2023-03-26T16:09:48Z)
- Delay-Aware Hierarchical Federated Learning [7.292078085289465]
The paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training.
During global synchronization, the cloud server consolidates local models with an outdated global model using a convex control algorithm.
Numerical evaluations show DFL's superior performance in terms of faster global model convergence, reduced resource consumption, and robustness against communication delays.
arXiv Detail & Related papers (2023-03-22T09:23:29Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Bayesian Federated Learning over Wireless Networks [87.37301441859925]
Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified BFL algorithm called scalableBFL (SBFL).
arXiv Detail & Related papers (2020-12-31T07:32:44Z)
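The global/personalized split described in the "Adaptive Model Pruning and Personalization" entry above can be illustrated with a minimal sketch; the two-part parameter layout and the plain averaging rule are assumptions for illustration, not that paper's exact algorithm.

```python
import numpy as np

# Minimal sketch: each device holds a "global" part (pruned and averaged
# across devices to learn shared representations) and a "personalized"
# part (kept local and fine-tuned per device).
def aggregate_global_parts(device_models: list[dict]) -> np.ndarray:
    """Average only the shared global part across devices."""
    return np.mean([m["global"] for m in device_models], axis=0)

rng = np.random.default_rng(0)
devices = [
    {"global": rng.normal(size=100), "personalized": rng.normal(size=10)}
    for _ in range(5)
]
shared = aggregate_global_parts(devices)
for m in devices:
    m["global"] = shared.copy()  # broadcast shared part; personalized part stays local
```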
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.