Privacy Amplification for Federated Learning via User Sampling and
Wireless Aggregation
- URL: http://arxiv.org/abs/2103.01953v1
- Date: Tue, 2 Mar 2021 18:59:37 GMT
- Title: Privacy Amplification for Federated Learning via User Sampling and
Wireless Aggregation
- Authors: Mohamed Seif, Wei-Ting Chang, Ravi Tandon
- Abstract summary: We study the problem of federated learning over a wireless channel with user sampling.
We propose a private wireless gradient aggregation scheme, which relies on independently randomized participation decisions by each user.
- Score: 17.56067859013419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of federated learning over a wireless
channel with user sampling, modeled by a Gaussian multiple access channel,
subject to central and local differential privacy (DP/LDP) constraints. It has
been shown that the superposition nature of the wireless channel provides a
dual benefit of bandwidth-efficient gradient aggregation, in conjunction with
strong DP guarantees for the users. Specifically, the central DP privacy
leakage has been shown to scale as $\mathcal{O}(1/K^{1/2})$, where $K$ is the
number of users. It has also been shown that user sampling coupled with
orthogonal transmission can enhance the central DP privacy leakage with the
same scaling behavior. In this work, we show that, by jointly incorporating both
wireless aggregation and user sampling, one can obtain even stronger privacy
guarantees. We propose a private wireless gradient aggregation scheme, which
relies on independently randomized participation decisions by each user. The
central DP leakage of our proposed scheme scales as $\mathcal{O}(1/K^{3/4})$.
In addition, we show that LDP is also boosted by user sampling. We also present
analysis for the convergence rate of the proposed scheme and study the
tradeoffs between wireless resources, convergence, and privacy theoretically
and empirically for two scenarios in which the number of sampled participants is
$(a)$ known, or $(b)$ unknown at the parameter server.
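To make the proposed scheme concrete, here is a minimal numerical sketch of one round of private wireless gradient aggregation with user sampling, in Python/NumPy. The sampling probability, clipping bound, and noise scale are illustrative assumptions, and the Gaussian MAC is idealized as a pure superposition of the transmitted signals; this is a sketch of the idea, not the paper's exact mechanism.

```python
import numpy as np

def wireless_aggregate(grads, p=0.5, clip=1.0, sigma=0.5, rng=None):
    """One round of private wireless gradient aggregation (sketch).

    Each of the K users independently decides to participate with
    probability p (user sampling). Participating users clip their
    gradient, add local Gaussian noise, and transmit simultaneously;
    the multiple access channel superimposes the signals, so the
    server only ever observes the sum.
    """
    rng = np.random.default_rng() if rng is None else rng
    K, d = grads.shape
    participate = rng.random(K) < p  # independent local coin flips
    received = np.zeros(d)
    for k in range(K):
        if not participate[k]:
            continue  # non-participants stay silent this round
        g = grads[k]
        g = g / max(1.0, np.linalg.norm(g) / clip)  # clip norm to <= clip
        received += g + rng.normal(0.0, sigma, d)   # noisy over-the-air sum
    # Scenario (a): the sampled count is known, so normalize by it.
    # Scenario (b): it is unknown, so one would normalize by p * K instead.
    return received / max(1, int(participate.sum()))

# Example: one aggregation round for K = 100 users, d = 10 dimensions.
rng = np.random.default_rng(0)
grads = rng.normal(size=(100, 10))
avg_grad = wireless_aggregate(grads, rng=rng)
```

Because the channel delivers only the sum, the server never observes any individual user's noisy gradient, and each user's independent Bernoulli participation adds sampling uncertainty on top; the combination of these two effects is what drives the central DP leakage down to $\mathcal{O}(1/K^{3/4})$.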
Related papers
- Scalable DP-SGD: Shuffling vs. Poisson Subsampling [61.19794019914523]
We provide new lower bounds on the privacy guarantee of the multi-epoch Adaptive Batch Linear Queries (ABLQ) mechanism with shuffled batch sampling.
We show substantial gaps compared to Poisson subsampling; prior analyses were limited to a single epoch.
We introduce a practical approach to implementing Poisson subsampling at scale using massively parallel computation (see the subsampling sketch after this list).
arXiv Detail & Related papers (2024-11-06T19:06:16Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses under the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - (Private) Kernelized Bandits with Distributed Biased Feedback [13.312928989951505]
We study kernelized bandits with distributed biased feedback.
A new distributed phase-then-batch-based elimination (DPBE) algorithm is proposed.
We show that DPBE achieves a sublinear regret of $\tilde{O}(T^{1-\alpha/2}+\sqrt{\gamma_T T})$, where $\alpha \in (0,1)$ is a user-sampling parameter one can tune.
arXiv Detail & Related papers (2023-01-28T02:30:15Z) - Discrete Distribution Estimation under User-level Local Differential
Privacy [37.65849910114053]
We study discrete distribution estimation under user-level local differential privacy (LDP).
In user-level $\varepsilon$-LDP, each user has $m \ge 1$ samples and the privacy of all $m$ samples must be preserved simultaneously.
arXiv Detail & Related papers (2022-11-07T18:29:32Z) - On Differential Privacy for Federated Learning in Wireless Systems with
Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Federated Stochastic Primal-dual Learning with Differential Privacy [15.310299472656533]
We propose a new federated primal-dual algorithm with differential privacy (FedSPDDP).
Our analysis shows that the data sampling strategy and PCP can enhance data privacy, whereas a larger number of local SGD steps could increase privacy leakage.
Experiment results are presented to evaluate the practical performance of the proposed algorithm.
arXiv Detail & Related papers (2022-04-26T13:10:37Z) - Differentially Private Federated Learning on Heterogeneous Data [10.431137628048356]
Federated Learning (FL) is a paradigm for large-scale distributed learning.
It faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.
We propose a novel FL approach to tackle these two challenges together by incorporating Differential Privacy (DP) constraints.
arXiv Detail & Related papers (2021-11-17T18:23:49Z) - User-Level Private Learning via Correlated Sampling [49.453751858361265]
We consider the setting where each user holds $m$ samples and the privacy protection is enforced at the level of each user's data.
We show that, in this setting, we can learn with far fewer users.
arXiv Detail & Related papers (2021-10-21T15:33:53Z) - Privacy Amplification via Random Check-Ins [38.72327434015975]
Differentially Private Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data.
In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL), wherein the data is distributed among many devices (clients).
Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client (see the check-in sketch after this list).
arXiv Detail & Related papers (2020-07-13T18:14:09Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
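For the DP-SGD entries above ("Scalable DP-SGD: Shuffling vs. Poisson Subsampling" and "Normalized/Clipped SGD with Perturbation"), here is a minimal sketch of one Poisson-subsampled, clipped, Gaussian-noised gradient step. The parameter names and defaults (q, clip, noise_mult, lr) are illustrative assumptions, not values from either paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, q=0.01, clip=1.0,
                noise_mult=1.0, lr=0.1, rng=None):
    """One DP-SGD step with Poisson subsampling (sketch).

    Each example is included independently with probability q, so the
    batch size is random -- this is the sampling model assumed by the
    amplification-by-subsampling analyses, unlike shuffled batches.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = per_example_grads.shape
    mask = rng.random(n) < q  # independent per-example inclusion coins
    total = np.zeros(d)
    for g in per_example_grads[mask]:
        total += g / max(1.0, np.linalg.norm(g) / clip)  # per-example clipping
    total += rng.normal(0.0, noise_mult * clip, d)       # Gaussian mechanism
    return params - lr * total / max(1, int(n * q))      # expected batch size
```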
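And for "Privacy Amplification via Random Check-Ins", a minimal sketch of the local participation decision the protocol relies on: each client independently decides whether to check in and, if so, picks a uniformly random time slot. The collision rule and all parameters are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def random_check_ins(num_clients, num_slots, p_check_in=0.5, rng=None):
    """Random check-in participation (sketch).

    Every client locally and independently flips a coin to decide
    whether to check in, and a checked-in client picks a uniform
    random slot; the server uses at most one update per slot.
    """
    rng = np.random.default_rng() if rng is None else rng
    arrivals = {}  # slot index -> clients that checked into that slot
    for client in range(num_clients):
        if rng.random() < p_check_in:         # local participation coin
            t = int(rng.integers(num_slots))  # uniform random slot choice
            arrivals.setdefault(t, []).append(client)
    # On collisions, the server keeps one checked-in client uniformly
    # at random and ignores the rest for that slot.
    return {t: int(rng.choice(clients)) for t, clients in arrivals.items()}
```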