Over-the-Air Federated Averaging with Limited Power and Privacy Budgets
- URL: http://arxiv.org/abs/2305.03547v1
- Date: Fri, 5 May 2023 13:56:40 GMT
- Title: Over-the-Air Federated Averaging with Limited Power and Privacy Budgets
- Authors: Na Yan, Kezhi Wang, Cunhua Pan, Kok Keong Chai, Feng Shu, and
Jiangzhou Wang
- Abstract summary: This paper studies a private over-the-air federated averaging (DP-OTA-FedAvg) system with a limited sum power budget.
We formulate an optimization problem to minimize the optimality gap of DP-OTA-FedAvg by jointly designing the device scheduling and the alignment coefficient, subject to sum power and privacy constraints.
- Score: 49.04036552090802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To jointly overcome the communication bottleneck and privacy leakage of
wireless federated learning (FL), this paper studies a differentially private
over-the-air federated averaging (DP-OTA-FedAvg) system with a limited sum
power budget. With DP-OTA-FedAvg, the gradients are aligned by an alignment
coefficient and aggregated over the air, and channel noise is employed to
protect privacy. We aim to improve the learning performance by jointly
designing the device scheduling, alignment coefficient, and the number of
aggregation rounds of federated averaging (FedAvg) subject to sum power and
privacy constraints. We first present the privacy analysis based on
differential privacy (DP) to quantify the impact of the alignment coefficient
on privacy preservation in each communication round. Furthermore, to study how
the device scheduling, alignment coefficient, and the number of global
aggregations affect the learning process, we conduct the convergence analysis of
DP-OTA-FedAvg in the cases of convex and non-convex loss functions. Based on
these analytical results, we formulate an optimization problem to minimize the
optimality gap of the DP-OTA-FedAvg subject to limited sum power and privacy
budgets. The problem is solved by decoupling it into two sub-problems. Given
the number of communication rounds, we derive the relationship between the
number of scheduled devices and the alignment coefficient, which yields a set
of candidate optimal pairs of device scheduling and the alignment
coefficient. Thanks to the reduced search space, the optimal solution can be
efficiently obtained. The effectiveness of the proposed policy is validated
through simulations.
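The abstract describes the mechanics only in prose. As a rough, non-authoritative illustration, the sketch below implements one communication round of over-the-air aggregation in the spirit of DP-OTA-FedAvg: scheduled devices clip their gradients, pre-scale them by a common alignment coefficient, and the receiver-side channel noise doubles as the DP mechanism. The function name and the values of alpha, noise_std, and clip are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def ota_fedavg_round(gradients, alpha, noise_std, clip=1.0, rng=None):
    """One over-the-air aggregation round (illustrative sketch only).

    gradients : list of per-device gradient vectors (1-D np.ndarray)
    alpha     : alignment coefficient scaling every transmitted signal
    noise_std : std of the receiver channel noise, reused as the DP noise
    clip      : L2 clipping bound so each device's contribution is bounded
    """
    rng = np.random.default_rng() if rng is None else rng
    d = gradients[0].shape[0]

    # Each scheduled device clips its gradient and pre-scales it by alpha.
    tx = []
    for g in gradients:
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        tx.append(alpha * g)

    # The multiple-access channel superimposes the signals; the noise is
    # added once at the receiver and is what provides the DP guarantee.
    y = np.sum(tx, axis=0) + rng.normal(0.0, noise_std, size=d)

    # The server de-scales by alpha and averages over scheduled devices.
    return y / (alpha * len(gradients))

# Toy usage: 5 scheduled devices, 10-dimensional gradients.
grads = [np.random.randn(10) for _ in range(5)]
print(ota_fedavg_round(grads, alpha=2.0, noise_std=0.1))
```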
Related papers
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We characterize the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
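As a hedged illustration of the three local-update types mentioned above (model, gradient, model difference), the toy snippet below pushes each through the same noisy AirComp sum; only the transmitted signal changes, while the channel-induced aggregation error enters identically. The helper names, dimensions, and learning rate are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def aircomp_sum(signals, noise_std=0.05):
    # Analog superposition: the channel returns the sum plus receiver noise.
    return np.sum(signals, axis=0) + rng.normal(0.0, noise_std, signals[0].shape)

d, K, lr = 8, 4, 0.1
w_global = np.zeros(d)
local_grads = [rng.normal(size=d) for _ in range(K)]
local_models = [w_global - lr * g for g in local_grads]  # one local step each

# (a) transmit full local models; the server averages the noisy sum
w_model = aircomp_sum(local_models) / K

# (b) transmit gradients; the server applies the averaged (noisy) gradient
w_grad = w_global - lr * aircomp_sum(local_grads) / K

# (c) transmit model differences (local model minus global model)
diffs = [m - w_global for m in local_models]
w_diff = w_global + aircomp_sum(diffs) / K

print(np.linalg.norm(w_model - w_grad), np.linalg.norm(w_grad - w_diff))
```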
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Integrated Sensing, Computation, and Communication for UAV-assisted
Federated Edge Learning [52.7230652428711]
Federated edge learning (FEEL) enables privacy-preserving model training through periodic communication between edge devices and the server.
Unmanned aerial vehicle (UAV)-mounted edge devices are particularly advantageous for FEEL thanks to their flexibility and mobility, which enable efficient data collection.
arXiv Detail & Related papers (2023-06-05T16:01:33Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
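What a per-parameter distortion mechanism could look like is sketched below under an invented heuristic: coordinates with larger magnitude (treated here as more utility-critical) receive less noise. This weighting rule is purely an assumption for illustration and is not the framework's actual mechanism.

```python
import numpy as np

def distort_parameters(params, base_std=0.1, rng=None):
    """Per-parameter distortion with an assumed magnitude heuristic."""
    rng = np.random.default_rng() if rng is None else rng
    # Assumed rule (not the paper's): larger-magnitude parameters are
    # treated as more utility-critical and receive less noise.
    scales = base_std / (1.0 + np.abs(params))
    return params + rng.normal(0.0, 1.0, params.shape) * scales

w = np.array([0.01, 0.5, 3.0, -2.0])
print(distort_parameters(w))
```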
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, training latency increases due to limited transmission bandwidth, and model performance degrades when differential privacy (DP) protection is applied.
We propose a sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
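To make the combination concrete, here is a minimal sketch (my own, not the paper's algorithm) of top-k gradient sparsification followed by Gaussian DP perturbation before transmission; k, the clipping bound, and the noise scale are illustrative assumptions.

```python
import numpy as np

def sparsify_and_privatize(grad, k, clip=1.0, noise_std=0.1, rng=None):
    """Top-k sparsification followed by Gaussian perturbation (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    # Clip so the sensitivity of the released vector is bounded.
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    # Keep only the k largest-magnitude coordinates: fewer symbols to send.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    # Add DP noise only on the transmitted support.
    sparse[idx] += rng.normal(0.0, noise_std, size=k)
    return sparse

g = np.random.randn(100)
print(np.count_nonzero(sparsify_and_privatize(g, k=10)))  # ~10 nonzeros
```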
arXiv Detail & Related papers (2023-04-09T05:21:15Z) - On Differential Privacy for Federated Learning in Wireless Systems with
Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are usually sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Wireless Federated Learning with Limited Communication and Differential
Privacy [21.328507360172203]
This paper investigates the role of dimensionality reduction in achieving efficient communication and differential privacy (DP) for the local datasets of remote users in an over-the-air computation (AirComp)-based federated learning (FL) model.
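As a hedged sketch of how dimensionality reduction can fit into AirComp-based FL, the snippet below compresses each local update with a shared random projection before the noisy over-the-air sum and maps the aggregate back at the server. The Gaussian projection and all dimensions are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, K = 1000, 100, 5  # model dimension, compressed dimension, devices

# Shared random projection, known to both the devices and the server.
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, d))

grads = [rng.normal(size=d) for _ in range(K)]

# Devices transmit m-dimensional projections; the channel sums them
# and adds receiver noise, as in AirComp.
y = np.sum([A @ g for g in grads], axis=0) + rng.normal(0.0, 0.05, size=m)

# The server forms a coarse estimate of the averaged gradient.
g_hat = A.T @ (y / K)
g_true = np.mean(grads, axis=0)
print(np.corrcoef(g_hat, g_true)[0, 1])  # positive correlation expected
```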
arXiv Detail & Related papers (2021-06-01T15:23:12Z) - Federated Learning with Sparsification-Amplified Privacy and Adaptive
Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
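A minimal sketch of the random-sparsification-plus-perturbation idea is given below, assuming a Bernoulli mask and Gaussian noise; keep_prob, clip, and noise_std are illustrative assumptions rather than the paper's exact mechanism.

```python
import numpy as np

def sparsified_private_update(grad, keep_prob=0.2, clip=1.0,
                              noise_std=0.1, rng=None):
    """Random sparsification plus gradient perturbation (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    # Releasing only a random subset of coordinates is the intuition
    # behind the amplified privacy guarantee.
    mask = rng.random(grad.shape) < keep_prob
    noisy = grad + rng.normal(0.0, noise_std, grad.shape)
    # Rescale kept coordinates so the update is unbiased in expectation.
    return np.where(mask, noisy / keep_prob, 0.0)

g = np.random.randn(50)
print(np.count_nonzero(sparsified_private_update(g)))
```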
arXiv Detail & Related papers (2020-08-01T20:22:57Z)