Gradient Statistics Aware Power Control for Over-the-Air Federated
Learning
- URL: http://arxiv.org/abs/2003.02089v3
- Date: Wed, 25 Nov 2020 10:36:35 GMT
- Title: Gradient Statistics Aware Power Control for Over-the-Air Federated
Learning
- Authors: Naifu Zhang and Meixia Tao
- Abstract summary: Federated learning (FL) is a promising technique that enables many edge devices to train a machine learning model collaboratively in wireless networks.
This paper studies the power control problem for over-the-air FL by taking gradient statistics into account.
- Score: 59.40860710441232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a promising technique that enables many edge
devices to train a machine learning model collaboratively in wireless networks.
By exploiting the superposition nature of wireless waveforms, over-the-air
computation (AirComp) can accelerate model aggregation and hence facilitate
communication-efficient FL. Due to channel fading, power control is crucial in
AirComp. Prior works assume that the signals to be aggregated from each device,
i.e., the local gradients, have identical statistics. In FL, however, gradient
statistics vary over both training iterations and feature dimensions, and are
unknown in advance. This paper studies the power control problem for
over-the-air FL by taking gradient statistics into account. The goal is to
minimize the aggregation error by optimizing the transmit power at each device
subject to peak power constraints. We obtain the optimal policy in closed form
when gradient statistics are given. Notably, we show that the optimal transmit
power is continuous and monotonically decreases with the squared multivariate
coefficient of variation (SMCV) of gradient vectors. We then propose a method
to estimate gradient statistics with negligible communication cost.
Experimental results demonstrate that the proposed gradient-statistics-aware
power control achieves higher test accuracy than the existing schemes for a
wide range of scenarios.
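To make the abstract's key quantities concrete, below is a minimal Python sketch of how a device might compute the SMCV of its local gradient and scale its transmit power so that power decreases monotonically with the SMCV under a peak power constraint. The SMCV definition (total variance over squared mean norm), the truncated channel inversion, and the 1/(1+SMCV) scaling are illustrative assumptions, not the paper's closed-form optimal policy.

```python
import numpy as np

def smcv(grad_mean: np.ndarray, grad_var: np.ndarray) -> float:
    # Squared multivariate coefficient of variation (SMCV): here taken as the
    # total per-dimension variance divided by the squared l2-norm of the mean
    # gradient. Illustrative definition; the paper's formal one may differ.
    return float(grad_var.sum() / (np.linalg.norm(grad_mean) ** 2 + 1e-12))

def device_transmit_power(h_k: complex, rho: float, p_peak: float) -> float:
    # Hypothetical per-device rule: truncated channel inversion capped at the
    # peak power, then scaled down as the gradient SMCV `rho` grows. The
    # monotone decrease in `rho` mirrors the abstract's finding, but this is
    # NOT the paper's closed-form optimal policy.
    p_inversion = 1.0 / (abs(h_k) ** 2 + 1e-12)  # align the received amplitude
    return min(p_peak, p_inversion) / (1.0 + rho)

# Toy usage: gradient statistics estimated from a few mini-batch gradients.
rng = np.random.default_rng(0)
grads = rng.normal(loc=0.05, scale=0.2, size=(8, 1000))  # 8 batches x 1000 dims
rho = smcv(grads.mean(axis=0), grads.var(axis=0))
print(device_transmit_power(h_k=0.7 + 0.3j, rho=rho, p_peak=1.0))
```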
Related papers
- Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique [14.986031916712108]
Federated learning (FL) is a popular machine learning technique that enables multiple users to collaboratively train a model while maintaining user data privacy.
A significant challenge in FL is the communication bottleneck in the upload direction and, consequently, the energy consumption of the devices.
We show the superiority of our method, in terms of communication overhead and energy, as compared to standard gradient-based FL methods.
arXiv Detail & Related papers (2024-09-24T20:57:22Z) - Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method [14.986031916712108]
Cross-device federated learning (FL) is a growing machine learning framework whereby multiple edge devices collaborate to train a model without disclosing their raw data.
We show how to harness the wireless channel in the learning algorithm itself instead of analyzing it in order to remove its impact.
arXiv Detail & Related papers (2024-01-30T21:46:09Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air
Federated Learning [31.966999085992505]
Federated learning (FL) is a privacy-preserving distributed training scheme.
We propose a device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise distortion.
arXiv Detail & Related papers (2023-05-26T12:04:59Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves similar learning accuracy compared with the HFL without model pruning and reduces about 50 percent communication cost.
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - Resource-Efficient and Delay-Aware Federated Learning Design under Edge
Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z) - Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
arXiv Detail & Related papers (2021-11-19T15:17:15Z) - Bayesian Federated Learning over Wireless Networks [87.37301441859925]
Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified Bayesian federated learning (BFL) algorithm called scalableBFL (SBFL).
arXiv Detail & Related papers (2020-12-31T07:32:44Z) - Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm, which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL; a toy sketch of such precoded aggregation follows this entry.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
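Since several of the related papers above (AirFedAvg, COTAF) revolve around over-the-air aggregation of local updates, here is a minimal Python sketch of one precoded AirComp round under stated assumptions: a unit-gain channel, a common scaling factor as the precoder, and additive receiver noise as the aggregation error. The function name and parameters are hypothetical and not taken from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(1)

def precoded_ota_round(local_updates: np.ndarray, p_max: float = 1.0,
                       noise_std: float = 0.05) -> np.ndarray:
    # Toy round of precoded over-the-air aggregation in the spirit of COTAF:
    # every device scales its update by a common factor chosen so that the
    # largest update energy in this round meets the power budget, the scaled
    # updates superpose over the air with additive noise, and the server
    # undoes the scaling and averages. Hypothetical sketch, not the exact
    # COTAF precoder from the paper.
    energies = np.sum(local_updates ** 2, axis=1)
    alpha = np.sqrt(p_max / (energies.max() + 1e-12))        # common precoder
    received = (alpha * local_updates).sum(axis=0) \
        + rng.normal(0.0, noise_std, local_updates.shape[1])
    return received / (alpha * local_updates.shape[0])       # de-scale, average

# Toy usage: 10 devices, 5-dimensional local SGD updates; print the
# magnitude of the aggregation error relative to the noiseless average.
updates = rng.normal(size=(10, 5))
print(np.linalg.norm(precoded_ota_round(updates) - updates.mean(axis=0)))
```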
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.