Completion Time Minimization of Fog-RAN-Assisted Federated Learning With Rate-Splitting Transmission
- URL: http://arxiv.org/abs/2206.01373v1
- Date: Fri, 3 Jun 2022 02:53:19 GMT
- Title: Completion Time Minimization of Fog-RAN-Assisted Federated Learning With Rate-Splitting Transmission
- Authors: Seok-Hwan Park and Hoon Lee
- Abstract summary: This work studies federated learning over a fog radio access network, in which multiple internet-of-things (IoT) devices cooperatively learn a shared machine learning model by communicating with a cloud server (CS) through distributed access points (APs).
Under the assumption that the fronthaul links connecting APs to CS have finite capacity, a rate-splitting transmission at IoT devices (IDs) is proposed which enables hybrid edge and cloud decoding of split uplink messages.
Numerical results show that the proposed rate-splitting transmission achieves notable gains over benchmark schemes which rely solely on edge or cloud decoding.
- Score: 21.397106355171946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work studies federated learning (FL) over a fog radio access network, in
which multiple internet-of-things (IoT) devices cooperatively learn a shared
machine learning model by communicating with a cloud server (CS) through
distributed access points (APs). Under the assumption that the fronthaul links
connecting APs to CS have finite capacity, a rate-splitting transmission at IoT
devices (IDs) is proposed which enables hybrid edge and cloud decoding of split
uplink messages. The problem of completion time minimization for FL is tackled
by optimizing the rate-splitting transmission and fronthaul quantization
strategies along with training hyperparameters such as precision and iteration
numbers. Numerical results show that the proposed rate-splitting transmission
achieves notable gains over benchmark schemes which rely solely on edge or
cloud decoding.
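To make the trade-off concrete, here is a toy numerical sketch, not the paper's actual formulation: a single device splits its uplink between an edge-decoded and a cloud-decoded sub-message, and we sweep the power split to see how a finite fronthaul shapes completion time. All constants (S, C_fh, B, snr, t_comp) are hypothetical placeholders.

```python
import numpy as np

# Toy illustration: the AP decodes the edge sub-message locally, then
# quantizes the residual signal and forwards it over a finite-capacity
# fronthaul to the cloud server (compress-and-forward style).
S = 1e6        # bits per model update (hypothetical)
T = 100        # number of FL rounds (a training hyperparameter)
t_comp = 0.05  # local computation time per round (s)
C_fh = 3e6     # fronthaul capacity (bits/s)
B, snr = 1e6, 10.0  # bandwidth (Hz), uplink SNR (linear)

def round_time(alpha):
    # Edge sub-message: decoded at the AP, cloud part treated as noise.
    r_edge = B * np.log2(1 + alpha * snr / (1 + (1 - alpha) * snr))
    r_edge = min(r_edge, C_fh)   # decoded bits use the fronthaul too
    fh_left = C_fh - r_edge      # budget left for the quantized signal
    # Cloud sub-message: quantization distortion D shrinks as more
    # fronthaul capacity remains (crude quantize-and-forward proxy).
    D = (1 + (1 - alpha) * snr) / max(2 ** (fh_left / B) - 1, 1e-9)
    r_cloud = B * np.log2(1 + (1 - alpha) * snr / (1 + D))
    return S / max(r_edge + r_cloud, 1e-9)

alphas = np.linspace(0, 1, 51)
times = [T * (t_comp + round_time(a)) for a in alphas]
best = alphas[int(np.argmin(times))]
print(f"best power split alpha={best:.2f}, completion time={min(times):.1f} s")
```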
Related papers
- A Novel Collaborative Framework for Efficient Synchronization in Split Federated Learning over Wireless Networks [4.462403784684656]
We propose a new framework, called Collaborative Split Federated Learning (CSFL), that redefines workload redistribution through device-to-device (D2D) collaboration.
CSFL enables efficient devices, after completing their own forward propagation, to seamlessly take over the unfinished layers of bottleneck devices.
This collaborative process, supported by D2D communications, allows bottleneck devices to offload earlier while maintaining synchronized progression across the network.
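A minimal sketch of the hand-over idea, assuming a toy NumPy MLP; the cut point, the D2D transfer, and the model itself are placeholders, not CSFL's actual protocol:

```python
import numpy as np

np.random.seed(0)
# Hypothetical 4-layer MLP; random weights stand in for a real model.
layers = [np.random.normal(scale=0.1, size=(32, 32)) for _ in range(4)]
relu = lambda x: np.maximum(x, 0.0)

def forward(x, first, last):
    # Run forward propagation over layers [first, last).
    for W in layers[first:last]:
        x = relu(x @ W)
    return x

x = np.random.normal(size=(8, 32))
h = forward(x, 0, 1)     # bottleneck device finishes only layer 0 in time
out = forward(h, 1, 4)   # an efficient peer receives h via D2D, runs 1..3
print(out.shape)         # (8, 32): forward pass completed collaboratively
```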
arXiv Detail & Related papers (2025-03-18T22:11:54Z)
- Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks [55.467288506826755]
Federated learning (FL) has been recognized as a viable solution for local-privacy-aware collaborative model training in wireless edge networks.
Most existing communication-efficient FL algorithms fail to reduce the significant inter-device variance.
We propose a novel communication-efficient FL algorithm, named FedQVR, which relies on a sophisticated variance-reduced scheme.
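The summary does not spell out FedQVR's quantizer, so the sketch below shows a generic unbiased stochastic quantizer of the kind that communication-efficient FL schemes commonly build on:

```python
import numpy as np

np.random.seed(0)

def stochastic_quantize(v, bits=4):
    # Unbiased per-tensor stochastic quantizer; a generic building block,
    # not FedQVR's exact scheme.
    levels = 2 ** bits - 1
    vmin, vmax = float(v.min()), float(v.max())
    scale = (vmax - vmin) / levels if vmax > vmin else 1.0
    x = (v - vmin) / scale                         # position on the grid
    lower = np.floor(x)
    up = np.random.rand(*v.shape) < (x - lower)    # round up w.p. = frac
    return vmin + (lower + up) * scale

update = np.random.normal(size=10_000).astype(np.float32)
q = stochastic_quantize(update, bits=4)
print("mean error ~", float((q - update).mean()))  # ~0: quantizer is unbiased
print("bits per entry: 32 ->", 4)
```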
arXiv Detail & Related papers (2025-01-20T04:26:21Z)
- Federated Split Learning with Model Pruning and Gradient Quantization in Wireless Networks [7.439160287320074]
Federated split learning (FedSL) implements collaborative training across the edge devices and the server through model splitting.
We propose a lightweight FedSL scheme that further alleviates the training burden on resource-constrained edge devices.
We conduct theoretical analysis to quantify the convergence performance of the proposed scheme.
arXiv Detail & Related papers (2024-12-09T11:43:03Z)
- Efficient Asynchronous Federated Learning with Sparsification and Quantization [55.6801207905772]
Federated Learning (FL) is attracting growing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally relies on a parameter server and a large number of edge devices throughout model training.
We propose TEASQ-Fed, which lets edge devices participate in training asynchronously by actively applying for tasks.
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
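The pruning criterion is not specified in the summary; a common instantiation is magnitude pruning, sketched below, where transmitting only the surviving weights roughly halves the payload:

```python
import numpy as np

np.random.seed(0)

def magnitude_prune(w, sparsity=0.5):
    # Zero the smallest-magnitude weights; magnitude pruning is a common
    # choice, though the paper's exact pruning rule may differ.
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold
    return w * mask, mask

w = np.random.normal(size=(256, 128)).astype(np.float32)
pruned, mask = magnitude_prune(w, sparsity=0.5)
# Sending only surviving values plus a 1-bit mask roughly halves the
# payload, in line with the ~50% communication saving reported above.
print(f"kept {mask.mean():.0%} of weights")
```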
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
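A sketch of what such an age-aware weighting could look like; the (1 + age)^(-decay) form is illustrative only, not necessarily the paper's exact rule:

```python
import numpy as np

np.random.seed(0)

def age_aware_step(global_w, updates, ages, decay=0.5):
    # Down-weight stale updates before aggregating them into the global
    # model; `decay` is a hypothetical tuning knob.
    w = np.array([(1.0 + a) ** (-decay) for a in ages])
    w /= w.sum()
    return global_w + sum(wi * ui for wi, ui in zip(w, updates))

global_w = np.zeros(4)
updates = [np.random.normal(size=4) for _ in range(3)]
ages = [0, 2, 7]   # rounds elapsed since each update's base model
print(age_aware_step(global_w, updates, ages))
```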
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that couples the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the number of participating IoT devices, we can avoid training divergence of compression-aided FL while maintaining communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Wireless Federated Learning (WFL) for 6G Networks -- Part II: The Compute-then-Transmit NOMA Paradigm [43.273644277347465]
We introduce and optimize a novel communication protocol for wireless federated learning (WFL) networks.
The Compute-then-Transmit NOMA (CT-NOMA) protocol is introduced, in which users concurrently terminate local model training and then simultaneously transmit the trained parameters to the central server.
Two different detection schemes for the mitigation of inter-user interference in NOMA are considered and evaluated, which correspond to fixed and variable decoding order.
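The rate computation behind both detection schemes can be sketched with a standard successive-interference-cancellation (SIC) model; the two-user powers and channel gains below are hypothetical:

```python
import numpy as np

def sic_rates(powers, gains, order, noise=1.0):
    # Uplink NOMA spectral efficiencies (bits/s/Hz) under SIC with the
    # given decoding order (decoded first to last).
    rx = powers * gains
    rates = np.zeros(len(powers))
    remaining = rx.sum()
    for k in order:
        rates[k] = np.log2(1 + rx[k] / (noise + remaining - rx[k]))
        remaining -= rx[k]   # user k is cancelled before the next one
    return rates

p = np.array([1.0, 1.0])
g = np.array([4.0, 1.0])
# Both orders reach the same sum rate; the decoding order (fixed vs.
# channel-dependent) only redistributes rate between the users.
print("decode 0 first:", sic_rates(p, g, [0, 1]))
print("decode 1 first:", sic_rates(p, g, [1, 0]))
```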
arXiv Detail & Related papers (2021-04-24T19:14:28Z)
- Federated Learning with Communication Delay in Edge Networks [5.500965885412937]
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks.
This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator.
A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm to incorporate a weighting between the current local model and the delayed global model received at each device during the synchronization step.
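The synchronization step described above maps naturally to a one-line update; the convex-combination form follows the summary, while gamma below is a tunable weight whose value here is arbitrary:

```python
import numpy as np

np.random.seed(0)

def feddelavg_sync(local_w, delayed_global_w, gamma=0.7):
    # FedDelAvg-style step: weight the device's current local model
    # against the delayed global model it just received.
    return gamma * local_w + (1.0 - gamma) * delayed_global_w

local_w = np.random.normal(size=5)
delayed_global_w = np.random.normal(size=5)
print(feddelavg_sync(local_w, delayed_global_w))
```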
arXiv Detail & Related papers (2020-08-21T06:21:35Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA Based MEC [21.267954799102874]
Federated learning (FL) is a widely pursued machine learning technique that can train a model centrally while keeping data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
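As a toy version of the power-allocation piece, the sketch below grid-searches a two-user NOMA power split for the weighted sum rate; all constants are hypothetical and the paper's scheme is more sophisticated:

```python
import numpy as np

g = np.array([4.0, 1.0])      # channel gains (hypothetical)
w = np.array([0.3, 0.7])      # per-user priority weights (hypothetical)
P_total, noise = 2.0, 1.0

best_p, best_wsr = None, -np.inf
for p0 in np.linspace(0.05, P_total - 0.05, 40):
    p = np.array([p0, P_total - p0])
    rx = p * g
    k = int(np.argmax(rx))    # decode the stronger received signal first
    r = np.zeros(2)
    r[k] = np.log2(1 + rx[k] / (noise + rx[1 - k]))
    r[1 - k] = np.log2(1 + rx[1 - k] / noise)
    wsr = float(w @ r)
    if wsr > best_wsr:
        best_p, best_wsr = p, wsr
print("power split:", best_p, "weighted sum rate:", round(best_wsr, 3))
```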
arXiv Detail & Related papers (2020-06-21T23:07:41Z)