Shuffle Gaussian Mechanism for Differential Privacy
- URL: http://arxiv.org/abs/2206.09569v1
- Date: Mon, 20 Jun 2022 04:54:16 GMT
- Title: Shuffle Gaussian Mechanism for Differential Privacy
- Authors: Seng Pei Liew, Tsubasa Takahashi
- Abstract summary: We study the mechanism's R\'enyi differential privacy (RDP), showing that it is of the form: $$ \epsilon(\lambda) \leq \frac{1}{\lambda-1}\log\left(\frac{e^{-\lambda/2\sigma^2}}{n^\lambda}\sum_{\substack{k_1+\dotsc+k_n=\lambda;\\k_1,\dotsc,k_n\geq 0}}\binom{\lambda!}{k_1,\dotsc,k_n}e^{\sum_{i=1}^nk_i^2/2\sigma^2}\right) $$
- Score: 2.7564955518050693
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the Gaussian mechanism in the shuffle model of differential privacy
(DP). Particularly, we characterize the mechanism's R\'enyi differential
privacy (RDP), showing that it is of the form: $$ \epsilon(\lambda) \leq
\frac{1}{\lambda-1}\log\left(\frac{e^{-\lambda/2\sigma^2}}{n^\lambda}\sum_{\substack{k_1+\dotsc+k_n=\lambda;\\k_1,\dotsc,k_n\geq
0}}\binom{\lambda!}{k_1,\dotsc,k_n}e^{\sum_{i=1}^nk_i^2/2\sigma^2}\right) $$ We
further prove that the RDP is strictly upper-bounded by the Gaussian RDP
without shuffling. The shuffle Gaussian RDP is advantageous in composing
multiple DP mechanisms, where we demonstrate its improvement over the
state-of-the-art approximate DP composition theorems in privacy guarantees of
the shuffle model. Moreover, we extend our study to the subsampled shuffle
mechanism and the recently proposed shuffled check-in mechanism, which are
protocols geared towards distributed/federated learning. Finally, an empirical
study of these mechanisms is given to demonstrate the efficacy of employing
shuffle Gaussian mechanism under the distributed learning framework to
guarantee rigorous user privacy.
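As a rough illustration of the bound quoted above, the sum can be evaluated directly for small R\'enyi orders. The Python sketch below is a minimal brute-force evaluation under the assumptions that $\binom{\lambda!}{k_1,\dotsc,k_n}$ denotes the multinomial coefficient $\lambda!/(k_1!\cdots k_n!)$, the query has unit L2 sensitivity, and $\lambda$ is an integer of at least 2; the function names and the final RDP-to-$(\epsilon,\delta)$ conversion (the standard $\epsilon_{\mathrm{RDP}} + \log(1/\delta)/(\lambda-1)$) are illustrative choices, not code from the paper.

```python
import math


def compositions(total, parts):
    """Yield every tuple of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest


def shuffle_gaussian_rdp(lam, sigma, n):
    """Brute-force evaluation of the RDP upper bound quoted in the abstract.

    lam   : integer Renyi order (lambda >= 2)
    sigma : Gaussian noise multiplier
    n     : number of shuffled users/clients
    Only feasible for small lam and n: the sum has C(lam+n-1, n-1) terms.
    """
    total = 0.0
    for ks in compositions(lam, n):
        denom = 1
        for k in ks:
            denom *= math.factorial(k)
        multinom = math.factorial(lam) // denom  # multinomial coefficient
        total += multinom * math.exp(sum(k * k for k in ks) / (2.0 * sigma ** 2))
    inner = math.exp(-lam / (2.0 * sigma ** 2)) / (n ** lam) * total
    return math.log(inner) / (lam - 1)


def gaussian_rdp(lam, sigma):
    """Standard Gaussian-mechanism RDP without shuffling: lambda / (2 sigma^2)."""
    return lam / (2.0 * sigma ** 2)


def rdp_to_dp(eps_rdp, lam, delta):
    """Standard RDP -> (epsilon, delta)-DP conversion: eps + log(1/delta)/(lambda-1)."""
    return eps_rdp + math.log(1.0 / delta) / (lam - 1)


if __name__ == "__main__":
    lam, sigma, n, rounds, delta = 4, 2.0, 16, 100, 1e-5
    shuffled = shuffle_gaussian_rdp(lam, sigma, n)
    plain = gaussian_rdp(lam, sigma)
    print(f"RDP at lambda={lam}: shuffled={shuffled:.4f}, unshuffled={plain:.4f}")
    # RDP composes additively over rounds before converting to (eps, delta)-DP.
    print(f"{rounds}-round (eps, {delta})-DP: "
          f"shuffled eps={rdp_to_dp(rounds * shuffled, lam, delta):.3f}, "
          f"unshuffled eps={rdp_to_dp(rounds * plain, lam, delta):.3f}")
```

For n = 1 the sum collapses to the single term k_1 = lambda and the expression reduces to lambda/(2 sigma^2), i.e. the unshuffled Gaussian RDP, consistent with the strict upper bound stated in the abstract; larger n gives smaller values, which is what makes the bound attractive under composition.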
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy [41.51051636162107]
This paper focuses on improving privacy bounds for shuffling models and one-iteration differentially private gradient descent.
We derive a closed-form expression of the trade-off function for shuffling models that outperforms the most up-to-date results.
We also study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\epsilon,\delta)$-DP.
arXiv Detail & Related papers (2023-10-30T19:37:51Z) - Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy [23.12198546384976]
Posterior sampling provides $\varepsilon$-pure differential privacy guarantees.
It does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP.
In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo.
arXiv Detail & Related papers (2023-10-23T07:54:39Z) - Truncated Laplace and Gaussian mechanisms of RDP [28.227024132603123]
The Laplace mechanism and the Gaussian mechanism are primary mechanisms in differential privacy.
Due to the infinite-range random variables they generate, the Laplace and Gaussian mechanisms may return values that are semantically impossible, such as negative numbers.
arXiv Detail & Related papers (2023-09-22T06:37:45Z) - The Poisson binomial mechanism for secure and private federated learning [19.399122892615573]
We introduce a discrete differential privacy mechanism for distributed mean estimation (DME) with applications to federated learning and analytics.
We provide a tight analysis of its privacy guarantees, showing that it achieves the same privacy-accuracy trade-offs as the continuous Gaussian mechanism.
arXiv Detail & Related papers (2022-07-09T05:46:28Z) - Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Auditing Differential Privacy in High Dimensions with the Kernel Quantum R\'enyi Divergence [29.796646032324514]
We propose relaxations of differential privacy based on new divergences on probability distributions.
We show that the regularized kernel R\'enyi divergence can be estimated from samples even in high dimensions.
arXiv Detail & Related papers (2022-05-27T12:34:17Z) - The Schr\"odinger Bridge between Gaussian Measures has a Closed Form [101.79851806388699]
We focus on the dynamic formulation of optimal transport (OT), also known as the Schr\"odinger bridge (SB) problem.
In this paper, we provide closed-form expressions for SBs between Gaussian measures.
arXiv Detail & Related papers (2022-02-11T15:59:01Z) - A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index [61.675604648670095]
We argue that the three prevailing interpretations of the GM, namely $(\varepsilon, \delta)$-DP, $f$-DP and R\'enyi DP, can be expressed by using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
arXiv Detail & Related papers (2021-09-22T06:20:01Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - Implicit Distributional Reinforcement Learning [61.166030238490634]
The implicit distributional actor-critic (IDAC) is built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe that IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.