On Differential Privacy for Federated Learning in Wireless Systems with
Multiple Base Stations
- URL: http://arxiv.org/abs/2208.11848v1
- Date: Thu, 25 Aug 2022 03:37:11 GMT
- Title: On Differential Privacy for Federated Learning in Wireless Systems with
Multiple Base Stations
- Authors: Nima Tavangaran, Mingzhe Chen, Zhaohui Yang, José Mairton B. Da
Silva Jr., H. Vincent Poor
- Abstract summary: We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
- Score: 90.53293906751747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we consider a federated learning model in a wireless system
with multiple base stations and inter-cell interference. We apply a
differentially private scheme to transmit information from users to their
corresponding base station during the learning phase. We show the convergence
behavior of the learning process by deriving an upper bound on its optimality
gap. Furthermore, we define an optimization problem to reduce this upper bound
and the total privacy leakage. To find locally optimal solutions to this
problem, we first propose an algorithm that schedules the resource blocks and
users. We then extend this scheme to reduce the total privacy leakage by
optimizing the differential privacy artificial noise. We apply the solutions of
these two procedures as parameters of a federated learning system. In this
setting, we assume that each user is equipped with a classifier. Moreover, most
communication cells are assumed to have fewer resource blocks than users. The
simulation results show that our proposed scheduler
improves the average accuracy of the predictions compared with a random
scheduler. Furthermore, its extended version with noise optimizer significantly
reduces the amount of privacy leakage.
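As a minimal sketch of the kind of differentially private uplink described in the abstract: each user clips its local update and adds artificial Gaussian noise before transmitting it to its serving base station. The clipping threshold, the (epsilon, delta) calibration, and all names below are illustrative assumptions, not the authors' exact scheme, which additionally optimizes the noise per user.

```python
# Sketch of a differentially private uplink: clip the local update, then add
# Gaussian noise calibrated to (epsilon, delta)-DP. Illustrative only; the
# calibration below assumes epsilon <= 1 and is not the authors' exact scheme.
import numpy as np


def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    """Classical Gaussian-mechanism noise scale for (epsilon, delta)-DP."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon


def privatize_update(update: np.ndarray, clip: float, epsilon: float,
                     delta: float, rng: np.random.Generator) -> np.ndarray:
    """Clip a local model update and add artificial noise before transmission."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))  # bound L2 sensitivity
    sigma = gaussian_sigma(epsilon, delta, sensitivity=clip)
    return clipped + rng.normal(0.0, sigma, size=update.shape)


rng = np.random.default_rng(0)
local_update = rng.normal(size=1000)  # stand-in for a user's gradient/model delta
noisy_update = privatize_update(local_update, clip=1.0, epsilon=1.0,
                                delta=1e-5, rng=rng)
```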
Related papers
- Dynamic Privacy Allocation for Locally Differentially Private Federated
Learning with Composite Objectives [10.528569272279999]
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
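A toy illustration of the dynamic allocation idea in this entry: split a fixed total noise variance across training rounds so that later rounds are perturbed less. The geometric decay used here is an assumption for illustration, not the paper's actual allocation rule.

```python
# Illustrative time-varying noise schedule for locally differentially private
# FL. The geometric decay is an assumed stand-in for the paper's dynamic
# allocation, which minimizes an upper bound of the optimization error.
import numpy as np


def noise_schedule(total_variance: float, num_rounds: int,
                   decay: float = 0.9) -> np.ndarray:
    """Split a fixed total noise variance across rounds with geometric decay."""
    weights = decay ** np.arange(num_rounds)
    return total_variance * weights / weights.sum()


variances = noise_schedule(total_variance=10.0, num_rounds=20)
stds = np.sqrt(variances)  # per-round noise std for perturbing shared updates
```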
arXiv Detail & Related papers (2023-08-02T13:30:33Z)
- Differentially Private Over-the-Air Federated Learning Over MIMO Fading Channels [24.534729104570417]
Federated learning (FL) enables edge devices to collaboratively train machine learning models.
While over-the-air model aggregation improves communication efficiency, uploading models to an edge server over wireless networks can pose privacy risks.
We show that FL model communication with a multiple-antenna server amplifies privacy leakage.
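A bare-bones sketch of the over-the-air aggregation described in this entry: clients transmit their updates simultaneously, the channel superposes the signals, and the server recovers a noisy average. Fading, MIMO processing, and power control are omitted; all names are illustrative.

```python
# Sketch of over-the-air model aggregation: the channel sums simultaneous
# analog transmissions, and the server sees that sum plus receiver noise.
import numpy as np


def ota_aggregate(updates: np.ndarray, noise_std: float,
                  rng: np.random.Generator) -> np.ndarray:
    """Superpose client updates over the air and average at the server."""
    num_clients, dim = updates.shape
    received = updates.sum(axis=0) + rng.normal(0.0, noise_std, dim)
    return received / num_clients  # noisy estimate of the model average


rng = np.random.default_rng(3)
updates = rng.normal(size=(8, 16))  # 8 clients, 16-dimensional model updates
global_update = ota_aggregate(updates, noise_std=0.1, rng=rng)
```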
arXiv Detail & Related papers (2023-06-19T14:44:34Z)
- Over-the-Air Federated Averaging with Limited Power and Privacy Budgets [49.04036552090802]
This paper studies a differentially private over-the-air federated averaging (DP-OTA-FedAvg) system with a limited sum power budget.
We formulate an analytical problem to minimize the optimality gap of DP-OTA-FedAvg by optimizing the aggregation coefficients, subject to the privacy constraints.
arXiv Detail & Related papers (2023-05-05T13:56:40Z)
- sqSGD: Locally Private and Communication Efficient Federated Learning [14.60645909629309]
Federated learning (FL) is a technique that trains machine learning models from decentralized data sources.
We develop a gradient-based learning algorithm called sqSGD that addresses communication efficiency and high-dimensional compatibility.
Experiment results show sqSGD successfully learns large models like LeNet and ResNet with local privacy constraints.
arXiv Detail & Related papers (2022-06-21T17:45:35Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows agents' private data to be inferred.
Existing schemes add perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the network centroid.
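A toy version of the cancellation idea in this entry, under the simplifying assumption that the nullspace condition reduces to zero-sum noise: perturbations are correlated across agents so that they vanish in the network average. The paper's actual graph-homomorphic construction is more general than this sketch.

```python
# Sketch of correlated perturbations that cancel in the network average,
# illustrating (not reproducing) the nullspace-conditioned construction.
import numpy as np


def zero_sum_noise(num_agents: int, dim: int, scale: float,
                   rng: np.random.Generator) -> np.ndarray:
    """Draw Gaussian noise per agent, then project out the mean so the
    perturbations sum to zero across agents (invisible to the centroid)."""
    noise = rng.normal(0.0, scale, size=(num_agents, dim))
    return noise - noise.mean(axis=0, keepdims=True)


rng = np.random.default_rng(1)
estimates = rng.normal(size=(5, 3))  # per-agent local estimates
perturbed = estimates + zero_sum_noise(5, 3, scale=0.5, rng=rng)
# The network centroid is unchanged by the perturbations:
assert np.allclose(perturbed.mean(axis=0), estimates.mean(axis=0))
```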
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Privacy Amplification via Random Check-Ins [38.72327434015975]
Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data.
In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL), wherein the data is distributed among many devices (clients).
Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client.
arXiv Detail & Related papers (2020-07-13T18:14:09Z)
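A minimal sketch of the random check-in idea from this last entry: each client independently flips a coin to decide whether to participate and, if so, picks a uniformly random server time slot. Probabilities and names are illustrative assumptions, not the protocol's exact parameters.

```python
# Sketch of random check-ins: participation decisions are made locally and
# independently by each client, with a uniformly random check-in time.
import numpy as np


def random_check_ins(num_clients: int, num_slots: int, p_participate: float,
                     rng: np.random.Generator) -> dict[int, list[int]]:
    """Return a {slot: [client ids]} schedule from independent local decisions."""
    schedule: dict[int, list[int]] = {t: [] for t in range(num_slots)}
    for client in range(num_clients):
        if rng.random() < p_participate:     # local, independent coin flip
            slot = int(rng.integers(num_slots))  # uniformly random check-in time
            schedule[slot].append(client)
    return schedule


rng = np.random.default_rng(2)
print(random_check_ins(num_clients=100, num_slots=10, p_participate=0.3, rng=rng))
```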