Multi-Carrier NOMA-Empowered Wireless Federated Learning with Optimal Power and Bandwidth Allocation
- URL: http://arxiv.org/abs/2302.06730v1
- Date: Mon, 13 Feb 2023 22:41:14 GMT
- Title: Multi-Carrier NOMA-Empowered Wireless Federated Learning with Optimal Power and Bandwidth Allocation
- Authors: Weicai Li, Tiejun Lv, Yashuai Cao, Wei Ni, and Mugen Peng
- Abstract summary: Wireless federated learning (WFL) suffers a communication bottleneck in the uplink, limiting the number of users that can upload their local models in each global aggregation round.
This paper presents a new multi-carrier non-orthogonal multiple-access (MC-NOMA) WFL that allows the users to train different numbers of iterations per round.
As corroborated using a convolutional neural network and an 18-layer residual network (ResNet-18), the proposed MC-NOMA WFL can efficiently reduce communication delay, increase local model training times, and accelerate convergence by over 40%, compared to its existing alternative.
- Score: 31.80744279032665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wireless federated learning (WFL) suffers a communication bottleneck in the
uplink, limiting the number of users that can upload their local models in each
global aggregation round. This paper presents a new multi-carrier
non-orthogonal multiple-access (MC-NOMA)-empowered WFL system under an adaptive
learning setting of Flexible Aggregation. Since a WFL round accommodates both
local model training and uploading for each user, Flexible Aggregation allows
the users to train different numbers of iterations per round, adapting to their
channel conditions and computing resources. The key idea is to use MC-NOMA to
upload the local models of the users concurrently, thereby extending the local
model training times of the users and increasing the number of participating
users. A new metric, namely, the Weighted Global Proportion of Trained
Mini-batches (WGPTM), is analytically established to measure the convergence of
the new system. We maximize the WGPTM, and hence harness the convergence of the
new system, by jointly optimizing the transmit powers and subchannel
bandwidths. This nonconvex problem is converted equivalently into a tractable
convex problem and solved efficiently using variable substitution and Cauchy's
inequality. As corroborated experimentally using a convolutional neural network
and an 18-layer residual network (ResNet-18), the proposed MC-NOMA WFL can
efficiently reduce the communication delay, increase the local model training
times, and accelerate convergence by over 40%, compared to its existing
alternative.
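To make the uplink mechanism concrete, the minimal Python sketch below compares an orthogonal (OMA) baseline against a NOMA upload in which two users transmit concurrently over a shared subchannel and the receiver applies successive interference cancellation (SIC). All numbers (bandwidth, powers, gains, model size, round duration) are hypothetical and not taken from the paper, and the paper's exact system model, WGPTM objective, and optimal power/bandwidth allocation are not reproduced here. As an aside, the per-user rate B log2(1 + p g / (B N0)) is jointly concave in the power p and bandwidth B (a standard perspective-function property), which is the kind of structure that makes such joint allocation problems amenable to convex reformulation.

```python
import numpy as np

# Minimal sketch of the uplink trade-off (all values hypothetical):
# two users share one subchannel to upload S_BITS of local model within
# a round of duration T_ROUND; whatever is left of the round can be
# spent on local training.
B_TOTAL = 1.0e6                 # subchannel bandwidth [Hz]
N0 = 1.0e-10                    # noise power spectral density [W/Hz]
S_BITS = 1.0e6                  # local model size [bits]
T_ROUND = 5.0                   # round duration [s]
p = np.array([0.2, 0.2])        # transmit powers [W] (assumed)
g = np.array([1.0e-3, 3.0e-4])  # channel gains; user 0 is the stronger

# Orthogonal baseline (e.g., FDMA): each user gets half the bandwidth.
b = B_TOTAL / 2
r_oma = b * np.log2(1 + p * g / (b * N0))

# MC-NOMA: both users transmit concurrently over the full bandwidth.
# With SIC, the stronger user is decoded first, treating the weaker one
# as interference; the weaker user is then decoded interference-free.
r_noma = np.array([
    B_TOTAL * np.log2(1 + p[0] * g[0] / (p[1] * g[1] + B_TOTAL * N0)),
    B_TOTAL * np.log2(1 + p[1] * g[1] / (B_TOTAL * N0)),
])

for name, r in [("OMA", r_oma), ("MC-NOMA", r_noma)]:
    t_up = S_BITS / r            # per-user upload delay [s]
    t_train = T_ROUND - t_up     # time left for local training [s]
    print(f"{name:8s} upload {t_up.round(2)} s, training {t_train.round(2)} s")
```

Under these example numbers, both users finish uploading no later than in the orthogonal baseline while sharing the full subchannel concurrently, leaving more of the round for local training, which is the effect the paper exploits via Flexible Aggregation.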
Related papers
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning [16.8717239856441]
We propose a model-splitting allowed FL (SFL) framework to alleviate the shortage of computing power faced by clients in training deep neural networks (DNNs) using federated learning (FL).
Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session.
To solve this mixed-integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut-layer and other parameters of an AI model, and thus transform the training latency minimization problem (TLMP) into a continuous problem.
arXiv Detail & Related papers (2023-07-21T12:26:42Z)
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical personalized federated learning (HPFL), an algorithm for deploying personalized FL (PFL) over massive mobile edge computing (MEC) networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergistic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Training SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- FedFog: Network-Aware Optimization of Federated Learning over Wireless Fog-Cloud Systems [40.421253127588244]
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters.
We first propose an efficient FL algorithm (called FedFog) to perform the local aggregation of gradient parameters at fog servers and global training update at the cloud.
arXiv Detail & Related papers (2021-07-04T08:03:15Z)
- Convergence Time Optimization for Federated Learning over Wireless Networks [160.82696473996566]
A wireless network is considered in which wireless users transmit their local FL models (trained using their locally collected data) to a base station (BS).
The BS, acting as a central controller, generates a global FL model using the received local FL models and broadcasts it back to all users.
Due to the limited number of resource blocks (RBs) in a wireless network, only a subset of users can be selected to transmit their local FL model parameters to the BS.
Since each user has unique training data samples, the BS prefers to include all local user FL models to generate a converged global FL model.
arXiv Detail & Related papers (2020-01-22T01:55:12Z)
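As a toy illustration of the resource-block-limited selection described in the last entry above, the Python sketch below (assumptions only, not the cited paper's algorithm) selects the users with the strongest channels up to the number of available resource blocks and aggregates their local models FedAvg-style, weighted by local dataset size.

```python
import numpy as np

# Toy illustration (hypothetical values): with only N_RB resource blocks,
# the BS can collect at most N_RB local models per round, so it selects
# the users with the best channels and aggregates their models weighted
# by local dataset size.
rng = np.random.default_rng(0)
N_USERS, N_RB, DIM = 8, 3, 4

channel_gain = rng.rayleigh(size=N_USERS)          # per-user channel quality
data_size = rng.integers(100, 1000, size=N_USERS)  # local samples per user
local_models = rng.normal(size=(N_USERS, DIM))     # flattened local weights

# Select the N_RB users with the strongest channels for this round.
selected = np.argsort(channel_gain)[-N_RB:]

# Weighted aggregation over the selected subset only (FedAvg-style).
w = data_size[selected] / data_size[selected].sum()
global_model = (w[:, None] * local_models[selected]).sum(axis=0)

print("selected users:", selected)
print("global model:", global_model)
```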