Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air Federated Learning
- URL: http://arxiv.org/abs/2305.16854v4
- Date: Thu, 23 Nov 2023 05:25:19 GMT
- Title: Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air Federated Learning
- Authors: Yuchang Sun and Zehong Lin and Yuyi Mao and Shi Jin and Jun Zhang
- Abstract summary: Federated learning (FL) is a privacy-preserving distributed training scheme.
We propose a device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise distortion.
- Score: 31.966999085992505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a popular privacy-preserving distributed training
scheme, where multiple devices collaborate to train machine learning models by
uploading local model updates. To improve communication efficiency,
over-the-air computation (AirComp) has been applied to FL, which leverages
analog modulation to harness the superposition property of radio waves such
that numerous devices can upload their model updates concurrently for
aggregation. However, the uplink channel noise incurs considerable model
aggregation distortion, which is critically determined by the device scheduling
and compromises the learned model performance. In this paper, we propose a
probabilistic device scheduling framework for over-the-air FL, named PO-FL, to
mitigate the negative impact of channel noise, where each device is scheduled
according to a certain probability and its model update is reweighted using
this probability in aggregation. We prove the unbiasedness of this aggregation
scheme and demonstrate the convergence of PO-FL on both convex and non-convex
loss functions. Our convergence bounds unveil that the device scheduling
affects the learning performance through the communication distortion and
global update variance. Based on the convergence analysis, we further develop a
channel and gradient-importance aware algorithm to optimize the device
scheduling probabilities in PO-FL. Extensive simulation results show that the
proposed PO-FL framework with channel and gradient-importance awareness
achieves faster convergence and produces better models than baseline methods.
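To make the scheduling-and-reweighting idea concrete, here is a minimal NumPy sketch of probabilistic scheduling with inverse-probability reweighting; the update values, the probability range, and the uniform device weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 10, 5

# Hypothetical local model updates, one row per device.
local_updates = rng.normal(size=(num_devices, dim))
# Illustrative scheduling probabilities (e.g., reflecting channel
# quality and gradient importance); each must lie in (0, 1].
probs = rng.uniform(0.3, 1.0, size=num_devices)

def po_fl_round(updates, probs, rng):
    """One round: schedule device i with probability probs[i] and
    reweight its update by 1/probs[i], making the aggregate unbiased."""
    scheduled = rng.random(len(probs)) < probs
    weighted = updates * (scheduled / probs)[:, None]
    return weighted.mean(axis=0)

# Averaging many independent rounds approaches the noiseless average
# of all updates, illustrating the unbiasedness of the scheme.
est = np.mean([po_fl_round(local_updates, probs, rng)
               for _ in range(20000)], axis=0)
print(np.allclose(est, local_updates.mean(axis=0), atol=0.02))
```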
Related papers
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
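For intuition, a minimal sketch of how AirComp superposition plus uplink noise distorts the aggregate; the unit channel gains and AWGN model are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
num_devices, dim, noise_std = 8, 4, 0.1

updates = rng.normal(size=(num_devices, dim))

# AirComp: all devices transmit analog signals simultaneously; the
# receiver observes their superposition plus channel noise (ideal
# unit channel gains with AWGN, a simplifying assumption).
received = updates.sum(axis=0) + rng.normal(scale=noise_std, size=dim)
noisy_avg = received / num_devices

# The gap to the ideal average is the aggregation error induced
# by the uplink channel noise.
print(np.linalg.norm(noisy_avg - updates.mean(axis=0)))
```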
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome the limited computation and communication resources of edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
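A minimal sketch of such a split model, assuming magnitude-based pruning of the shared part and a locally trained head; both choices are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)
dim_in, dim_hidden, dim_out = 16, 8, 3

# Global part: shared representation weights, with a binary pruning
# mask (illustrative magnitude-based pruning keeping the top 50%).
w_global = rng.normal(size=(dim_in, dim_hidden))
threshold = np.median(np.abs(w_global))
mask = (np.abs(w_global) >= threshold).astype(float)

# Personalized part: a per-device head fine-tuned locally.
w_personal = rng.normal(size=(dim_hidden, dim_out))

def forward(x):
    """Shared pruned representation followed by a personal head."""
    h = np.maximum(x @ (w_global * mask), 0.0)  # ReLU features
    return h @ w_personal

print(forward(rng.normal(size=(2, dim_in))).shape)  # (2, 3)
```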
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
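As a toy illustration of the alternate convex search used in such system optimization, the sketch below alternately minimizes a biconvex function in closed form; the objective is made up for illustration and is far simpler than the paper's transceiver and fronthaul quantization design.

```python
# Toy alternate convex search: f(u, v) = (u*v - 3)^2 + u^2 + v^2 is
# non-convex jointly but convex in u for fixed v (and vice versa),
# so we alternately minimize each block in closed form.
def argmin_u(v):
    # Setting d/du [(u*v - 3)^2 + u^2] = 0 gives u = 3v / (v^2 + 1).
    return 3 * v / (v**2 + 1)

u, v = 1.0, 1.0
for _ in range(50):
    u = argmin_u(v)
    v = argmin_u(u)  # same formula by symmetry of f in u and v

# Converges to u = v = sqrt(2), a stationary point of f.
print(u, v, (u * v - 3) ** 2 + u**2 + v**2)
```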
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
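A minimal sketch of one plausible age-aware weighting rule; the exponential-decay form is an assumption, not the paper's exact design.

```python
import numpy as np

def age_aware_weights(staleness, decay=0.5):
    """Down-weight each device's update by how many rounds old it is,
    then normalize the weights to sum to one."""
    w = np.power(decay, np.asarray(staleness, dtype=float))
    return w / w.sum()

# Three devices whose latest updates are 0, 2, and 5 rounds old.
print(age_aware_weights([0, 2, 5]))
```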
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
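A minimal sketch of bitwidth-limited uploads, assuming a simple uniform quantizer and fixed per-device bitwidths; the RL-based action selection is not modeled here.

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantization of x to the given bitwidth over its own
    value range (a simplifying assumption about the quantizer)."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    step = (hi - lo) / levels
    return lo + np.round((x - lo) / step) * step

rng = np.random.default_rng(3)
local_models = [rng.normal(size=6) for _ in range(4)]
bitwidths = [2, 4, 4, 8]  # illustrative per-device bitwidths

# Devices upload quantized parameters; the server aggregates them
# into a quantized global model and synchronizes the devices.
agg = np.mean([quantize(m, b) for m, b in zip(local_models, bitwidths)],
              axis=0)
global_model = quantize(agg, bits=8)
print(global_model)
```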
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv algorithm incorporates a local-global model combiner into the FL computation step.
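A minimal sketch of a local-global model combiner, assuming a convex combination with a fixed mixing weight; the actual combiner in StoFedDelAv may differ.

```python
import numpy as np

def combine(local_model, global_model, alpha=0.7):
    """Before local training, each device mixes its own model with
    the broadcast global one (illustrative convex combination)."""
    return alpha * local_model + (1.0 - alpha) * global_model

print(combine(np.ones(3), np.zeros(3)))  # [0.7 0.7 0.7]
```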
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
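A minimal sketch of why retransmissions help, assuming i.i.d. AWGN on each copy so that averaging reduces the estimation-error variance.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, noise_std, retransmissions = 4, 0.5, 8
true_update = rng.normal(size=dim)

# Each (re)transmission of the same update arrives with fresh channel
# noise; averaging the copies shrinks the estimation-error variance
# by the number of retransmissions (an idealized AWGN assumption).
copies = true_update + rng.normal(scale=noise_std,
                                  size=(retransmissions, dim))
estimate = copies.mean(axis=0)
print(np.linalg.norm(estimate - true_update))
```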
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
- User Scheduling for Federated Learning Through Over-the-Air Computation [22.853678584121862]
A machine learning technique termed federated learning (FL) aims to preserve data at the edge devices and to exchange only ML model parameters in the learning process.
FL not only reduces the communication needs but also helps to protect local privacy.
AirComp computes while transmitting by letting multiple devices send their data simultaneously using analog modulation.
arXiv Detail & Related papers (2021-08-05T23:58:15Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
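A minimal sketch of time-varying precoding in the spirit of COTAF, assuming a per-round power normalization; the exact COTAF scaling rule differs.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, power, noise_std = 4, 1.0, 0.1
update = 0.05 * rng.normal(size=dim)  # late-round updates shrink

# Amplify the shrinking update to use the full transmit power budget,
# then undo the scaling at the server, so the effective noise added
# to the model decays as training progresses (illustrative rule).
alpha = np.sqrt(power) / (np.linalg.norm(update) + 1e-12)
received = alpha * update + rng.normal(scale=noise_std, size=dim)
recovered = received / alpha  # effective noise std: noise_std / alpha
print(np.linalg.norm(recovered - update))
```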
arXiv Detail & Related papers (2020-09-27T08:28:25Z)