Analog-digital Scheduling for Federated Learning: A
Communication-Efficient Approach
- URL: http://arxiv.org/abs/2402.00318v2
- Date: Fri, 2 Feb 2024 17:08:39 GMT
- Title: Analog-digital Scheduling for Federated Learning: A
Communication-Efficient Approach
- Authors: Muhammad Faraz Ul Abrar and Nicolò Michelusi
- Abstract summary: Over-the-air (OTA) computation has recently emerged as a communication-efficient Federated Learning (FL) paradigm to train machine learning models over wireless networks.
However, its performance is limited by the device with the worst SNR, resulting in fast yet noisy updates.
We present ADFL, a novel Analog-Digital Federated Learning scheme, to mitigate the noise problem.
Our simulation results show that ADFL, by scheduling most of the devices in the OTA scheme while also occasionally employing the digital scheme for a few devices, consistently outperforms OTA-only and digital-only schemes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-the-air (OTA) computation has recently emerged as a
communication-efficient Federated Learning (FL) paradigm to train machine
learning models over wireless networks. However, its performance is limited by
the device with the worst SNR, resulting in fast yet noisy updates. On the
other hand, allocating orthogonal resource blocks (RB) to individual devices
via digital channels mitigates the noise problem, at the cost of increased
communication latency. In this paper, we address this discrepancy and present
ADFL, a novel Analog-Digital FL scheme: in each round, the parameter server
(PS) schedules each device to either upload its gradient via the analog OTA
scheme or transmit its quantized gradient over an orthogonal RB using the
"digital" scheme. Focusing on a single FL round, we cast the optimal
scheduling problem as the minimization of the mean squared error (MSE) on the
estimated global gradient at the PS, subject to a delay constraint, yielding
the optimal device scheduling configuration and quantization bits for the
digital devices. Our simulation results show that ADFL, by scheduling most of
the devices in the OTA scheme while also occasionally employing the digital
scheme for a few devices, consistently outperforms OTA-only and digital-only
schemes, in both i.i.d. and non-i.i.d. settings.
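The hybrid scheduling idea can be illustrated with a toy sketch. The threshold rule, noise level, and quantizer below are illustrative assumptions, not the paper's MSE-minimizing optimizer: high-SNR devices aggregate over the air in one shot (fast but noisy), while low-SNR devices upload uniformly quantized gradients over orthogonal resource blocks.

```python
import numpy as np

def schedule_devices(snrs_db, snr_threshold_db=5.0):
    """Toy scheduling rule (illustrative, not the paper's optimizer):
    devices at or above the SNR threshold use analog OTA aggregation,
    the rest are assigned orthogonal digital resource blocks."""
    snrs_db = np.asarray(snrs_db)
    ota = np.flatnonzero(snrs_db >= snr_threshold_db)
    digital = np.flatnonzero(snrs_db < snr_threshold_db)
    return ota, digital

def aggregate(grads, ota_idx, dig_idx, ota_noise_std=0.1, dig_bits=8):
    """Estimate the global gradient: OTA devices are summed with a single
    shot of additive channel noise; digital devices contribute uniformly
    quantized copies, trading extra latency for noise-free reception."""
    d = grads.shape[1]
    total = np.zeros(d)
    if len(ota_idx):
        total += grads[ota_idx].sum(axis=0) + np.random.randn(d) * ota_noise_std
    for i in dig_idx:
        g = grads[i]
        scale = np.abs(g).max() / (2 ** (dig_bits - 1) - 1) or 1.0
        total += np.round(g / scale) * scale  # symmetric uniform quantization
    return total / len(grads)

snrs = [12.0, 3.0, 8.0, -1.0]          # per-device uplink SNRs (hypothetical)
grads = np.random.randn(4, 10)          # local gradients, model dimension 10
ota, dig = schedule_devices(snrs)
g_hat = aggregate(grads, ota, dig)      # noisy estimate of the global gradient
```

In the paper the split and the per-device bit allocation come from minimizing the MSE of the estimated global gradient under a delay constraint; the fixed threshold here only conveys the intuition that most devices go OTA while the stragglers go digital.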
Related papers
- Learned Digital Codes for Over-the-Air Computation in Federated Edge Learning [42.73991180442414]
Federated edge learning (FEEL) enables wireless devices to collaboratively train a centralised model without sharing raw data. OTA aggregation alleviates this by exploiting the superposition property of the wireless channel, enabling simultaneous transmission and merging communication with computation. This work proposes a learned digital OTA framework that improves recovery accuracy, convergence behaviour, and robustness to challenging SNR conditions.
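The superposition property these OTA schemes exploit can be shown in a few lines. All numbers below (device count, dimension, noise scale) are made up for illustration: every device transmits its analog-modulated gradient simultaneously, so the multiple-access channel delivers the sum in a single channel use.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 5, 8                       # devices and model dimension (hypothetical)
grads = rng.normal(size=(K, d))   # local gradients, one row per device

# The multiple-access channel adds the simultaneously transmitted
# signals, so the server receives the sum of all K gradients in one
# channel use, corrupted by receiver noise.
noise = rng.normal(scale=0.01, size=d)
received = grads.sum(axis=0) + noise

global_estimate = received / K    # noisy federated average
exact_average = grads.mean(axis=0)
```

The estimation error is just `noise / K`, which is why OTA aggregation is bandwidth-efficient but inherits the noise of the worst link; the learned digital OTA framework above aims to recover the sum more robustly at low SNR.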
arXiv Detail & Related papers (2025-12-22T15:01:41Z) - Lightweight Federated Learning over Wireless Edge Networks [83.4818741890634]
Federated learning (FL) is an attractive training paradigm at the network edge, but remains challenging over wireless networks. We derive a closed-form expression for the FL convergence gap in terms of transmission power, model pruning error, and quantization. LTFL outperforms state-of-the-art schemes in experiments on real-world datasets.
arXiv Detail & Related papers (2025-07-13T09:14:17Z) - Biased Federated Learning under Wireless Heterogeneity [7.3716675761469945]
Federated learning (FL) is a promising framework for distributed computation, enabling collaborative model training without sharing private data.
Existing wireless FL works primarily adopt two communication strategies: (1) over-the-air (OTA) computation, which exploits wireless signal superposition, and (2) orthogonal digital transmission, which allocates dedicated resources to each device.
This paper proposes novel OTA and digital FL updates that allow a structured, time-invariant bias, thereby reducing variance in FL updates.
arXiv Detail & Related papers (2025-03-08T05:55:14Z) - Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme with blockchain.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z) - Digital versus Analog Transmissions for Federated Learning over Wireless
Networks [91.20926827568053]
We compare two effective communication schemes for wireless federated learning (FL) over resource-constrained networks.
We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints.
A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks.
arXiv Detail & Related papers (2024-02-15T01:50:46Z) - Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air
Federated Learning [31.966999085992505]
Federated learning (FL) is a privacy-preserving distributed training scheme.
We propose a device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise distortion.
arXiv Detail & Related papers (2023-05-26T12:04:59Z) - Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, training latency increases due to limited transmission bandwidth, and model utility degrades under differential privacy (DP) protection.
We propose a gradient-sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
arXiv Detail & Related papers (2023-04-09T05:21:15Z) - Digital Over-the-Air Federated Learning in Multi-Antenna Systems [30.137208705209627]
We study the performance optimization of federated learning (FL) over a realistic wireless communication system with digital modulation and over-the-air computation (AirComp).
We propose a modified federated averaging (FedAvg) algorithm that combines digital modulation with AirComp to mitigate wireless fading while ensuring the communication efficiency.
An artificial neural network (ANN) is used to estimate the local FL models of all devices and adjust the beamforming matrices at the PS for future model transmission.
arXiv Detail & Related papers (2023-02-04T07:26:06Z) - Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
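The bandwidth/accuracy trade-off that bitwidth selection navigates can be sketched with a symmetric uniform quantizer. The quantizer and bitwidths below are illustrative assumptions, not the paper's scheme: coarser bitwidths shrink the uplink payload at the cost of larger quantization error.

```python
import numpy as np

def quantize(w, bits):
    """Symmetric uniform quantizer: snap each weight to one of
    2^(bits-1) - 1 positive/negative levels spanning [-max|w|, max|w|]."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=1000)           # stand-in for local model parameters
for bits in (4, 8, 12):
    mse = np.mean((w - quantize(w, bits)) ** 2)
    print(bits, mse)                # MSE shrinks as the bitwidth grows
```

Each extra bit halves the step size, so the quantization MSE drops roughly fourfold per bit; the RL controller in the paper picks the bitwidth per round to balance this error against communication cost.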
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z) - Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid
Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z) - Unit-Modulus Wireless Federated Learning Via Penalty Alternating
Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a wireless FL framework, which uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z) - Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z) - Joint Device Scheduling and Resource Allocation for Latency Constrained
Wireless Federated Learning [26.813145949399427]
In federated learning (FL), devices upload their local model updates via wireless channels.
We propose a joint device scheduling and resource allocation policy to maximize the model accuracy.
Experiments show that the proposed policy outperforms state-of-the-art scheduling policies.
arXiv Detail & Related papers (2020-07-14T16:46:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.