Wireless Federated Learning (WFL) for 6G Networks -- Part II: The
Compute-then-Transmit NOMA Paradigm
- URL: http://arxiv.org/abs/2104.12005v1
- Date: Sat, 24 Apr 2021 19:14:28 GMT
- Title: Wireless Federated Learning (WFL) for 6G Networks -- Part II: The
Compute-then-Transmit NOMA Paradigm
- Authors: Pavlos S. Bouzinis, Panagiotis D. Diamantoulakis, George K.
Karagiannidis
- Abstract summary: We introduce and optimize a novel communication protocol for wireless federated learning (WFL) networks.
The Compute-then-Transmit NOMA (CT-NOMA) protocol is introduced, where users concurrently complete local model training and then simultaneously transmit the trained parameters to the central server.
Two different detection schemes for the mitigation of inter-user interference in NOMA are considered and evaluated, which correspond to fixed and variable decoding order.
- Score: 43.273644277347465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As it has been discussed in the first part of this work, the utilization of
advanced multiple access protocols and the joint optimization of the
communication and computing resources can facilitate the reduction of delay for
wireless federated learning (WFL), which is of paramount importance for the
efficient integration of WFL in the sixth generation of wireless networks (6G).
To this end, in this second part we introduce and optimize a novel
communication protocol for WFL networks, that is based on non-orthogonal
multiple access (NOMA). More specifically, the Compute-then-Transmit NOMA
(CT-NOMA) protocol is introduced, where users concurrently complete the local
model training and then simultaneously transmit the trained parameters to the
central server. Moreover, two different detection schemes for the mitigation of
inter-user interference in NOMA are considered and evaluated, which correspond
to fixed and variable decoding order during the successive interference
cancellation process. Furthermore, the computation and communication resources
are jointly optimized for both considered schemes, with the aim to minimize the
total delay during a WFL communication round. Finally, the simulation results
verify the effectiveness of CT-NOMA in terms of delay reduction, compared to
the considered benchmark that is based on time-division multiple access.
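The protocol above can be illustrated numerically. The following toy sketch (all channel gains, powers, payload sizes, and timings are hypothetical, not taken from the paper) shows how successive interference cancellation (SIC) with a fixed decoding order yields per-user uplink rates, and why the CT-NOMA round delay is the common computation time plus the slowest of the simultaneous uploads:

```python
import numpy as np

# Toy sketch of CT-NOMA uplink decoding with SIC (hypothetical parameters).
# K users finish local training at the same time, then transmit simultaneously;
# the server decodes them one by one, subtracting each decoded signal (SIC).

def sic_rates(gains, powers, decoding_order, noise=1.0, bandwidth=1.0):
    """Achievable rate for each user under SIC.

    When user k is decoded, all not-yet-decoded users act as interference.
    """
    rates = np.zeros(len(gains))
    remaining = list(decoding_order)
    for k in decoding_order:
        remaining.remove(k)
        interference = sum(powers[j] * gains[j] for j in remaining)
        sinr = powers[k] * gains[k] / (noise + interference)
        rates[k] = bandwidth * np.log2(1.0 + sinr)
    return rates

gains = np.array([4.0, 1.0, 0.25])   # channel gains (user 0 strongest)
powers = np.array([1.0, 1.0, 1.0])   # transmit powers
model_bits = 1e4                     # parameters to upload per user

# Fixed decoding order: strongest received signal first (classic NOMA heuristic).
fixed_order = list(np.argsort(-gains * powers))
r_fixed = sic_rates(gains, powers, fixed_order)

# CT-NOMA round delay = shared computation time + slowest upload,
# since all users transmit at the same time over the same band.
t_comp = 2.0                         # common local-training time (s)
t_round = t_comp + max(model_bits / r_fixed)
```

A variable decoding order would re-run `sic_rates` over candidate permutations and keep the one minimizing the slowest upload; the paper jointly optimizes this together with the computation resources.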
Related papers
- Orchestrating Multimodal DNN Workloads in Wireless Neural Processing [57.510786937781866]
In edge inference, wireless resource allocation and accelerator deep neural network (DNN) scheduling have yet to be co-optimized in an end-to-end manner. This paper investigates a paradigm that integrates wireless transmission and multi-core execution into a unified end-to-end pipeline.
arXiv Detail & Related papers (2026-03-02T17:25:43Z) - Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks [55.467288506826755]
Federated learning (FL) has been recognized as a viable solution for local-privacy-aware collaborative model training in wireless edge networks.
Most existing communication-efficient FL algorithms fail to reduce the significant inter-device variance.
We propose a novel communication-efficient FL algorithm, named FedQVR, which relies on a sophisticated variance-reduced scheme.
arXiv Detail & Related papers (2025-01-20T04:26:21Z) - High Efficiency Inference Accelerating Algorithm for NOMA-based Mobile
Edge Computing [23.88527790721402]
Splitting the inference model between device, edge server, and cloud can greatly improve the performance of edge intelligence (EI).
NOMA, a key supporting technology of B5G/6G, can achieve massive connectivity and high spectral efficiency.
We propose an effective communication and computing resource allocation algorithm to accelerate model inference at the edge.
arXiv Detail & Related papers (2023-12-26T02:05:52Z) - Wirelessly Powered Federated Learning Networks: Joint Power Transfer,
Data Sensing, Model Training, and Resource Allocation [24.077525032187893]
Federated learning (FL) has found many successes in wireless networks.
The implementation of FL has been hindered by the energy limitations of mobile devices (MDs) and the availability of training data at MDs.
How to integrate wireless power transfer to enable sustainable FL networks remains an open problem.
arXiv Detail & Related papers (2023-08-09T13:38:58Z) - Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
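As a rough illustration of how such a tuning problem can be framed as a Markov decision process (this is not the paper's algorithm; the states, actions, and reward below are hypothetical stand-ins), a tabular Q-learning sketch for per-stage bitwidth selection:

```python
import random

# Toy sketch: bitwidth selection in quantized FL framed as a small MDP,
# solved with tabular Q-learning. All quantities are hypothetical.

BITWIDTHS = [4, 8, 16]   # candidate quantization levels (actions)
STAGES = [0, 1, 2]       # coarse training stages (states)

def reward(stage, bits):
    # Hypothetical trade-off: a low bitwidth is cheap to transmit but
    # its accuracy penalty grows in later stages of training.
    comm_cost = bits / 16.0
    acc_loss = ((16 - bits) / 16.0) ** 2 * stage
    return -(comm_cost + acc_loss)

Q = {(s, b): 0.0 for s in STAGES for b in BITWIDTHS}
alpha, gamma, eps = 0.1, 0.9, 0.1
random.seed(0)

for episode in range(2000):
    for s in STAGES:  # sweep all states once per episode
        b = random.choice(BITWIDTHS) if random.random() < eps \
            else max(BITWIDTHS, key=lambda a: Q[(s, a)])
        r = reward(s, b)
        s_next = min(s + 1, STAGES[-1])
        best_next = max(Q[(s_next, a)] for a in BITWIDTHS)
        Q[(s, b)] += alpha * (r + gamma * best_next - Q[(s, b)])

# Greedy policy: preferred bitwidth per training stage.
policy = {s: max(BITWIDTHS, key=lambda a: Q[(s, a)]) for s in STAGES}
```

Under this toy reward, early stages favor aggressive quantization while later stages shift toward higher bitwidths, mirroring the intuition behind adaptive-bitwidth FL.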
arXiv Detail & Related papers (2022-09-21T08:52:51Z) - Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated
Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z) - Completion Time Minimization of Fog-RAN-Assisted Federated Learning With
Rate-Splitting Transmission [21.397106355171946]
This work studies federated learning over a fog radio access network, in which multiple internet-of-things (IoT) devices cooperatively learn a shared machine learning model by communicating with a cloud server (CS) through distributed access points (APs).
Under the assumption that the fronthaul links connecting APs to CS have finite capacity, a rate-splitting transmission at IoT devices (IDs) is proposed which enables hybrid edge and cloud decoding of split uplink messages.
Numerical results show that the proposed rate-splitting transmission achieves notable gains over benchmark schemes which rely solely on edge or cloud decoding.
arXiv Detail & Related papers (2022-06-03T02:53:19Z) - Deep Learning-Based Synchronization for Uplink NB-IoT [72.86843435313048]
We propose a neural network (NN)-based algorithm for device detection and time of arrival (ToA) estimation for the narrowband physical random-access channel (NPRACH) of narrowband internet of things (NB-IoT).
The introduced NN architecture leverages residual convolutional networks as well as knowledge of the preamble structure of the 5G New Radio (5G NR) specifications.
arXiv Detail & Related papers (2022-05-22T12:16:43Z) - Federated Learning for Energy-limited Wireless Networks: A Partial Model
Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z) - Over-the-Air Federated Learning via Second-Order Optimization [37.594140209854906]
Federated learning (FL) could result in task-oriented data traffic flows over wireless networks with limited radio resources.
We propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.
arXiv Detail & Related papers (2022-03-29T12:39:23Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.