Count on Your Elders: Laplace vs Gaussian Noise
- URL: http://arxiv.org/abs/2408.07021v3
- Date: Mon, 18 Nov 2024 15:40:25 GMT
- Title: Count on Your Elders: Laplace vs Gaussian Noise
- Authors: Joel Daniel Andersson, Rasmus Pagh, Teresa Anna Steiner, Sahel Torkamani
- Abstract summary: We argue that Laplace noise may in fact be preferable to Gaussian noise in many settings.
We show that the noise added by the Gaussian mechanism can always be replaced by Laplace noise of comparable variance.
This challenges the conventional wisdom that Gaussian noise should be preferred in high dimensions.
- Score: 9.546521474972485
- Abstract: In recent years, Gaussian noise has become a popular tool in differentially private algorithms, often replacing the Laplace noise that dominated the early literature. Gaussian noise is the standard approach to $\textit{approximate}$ differential privacy, often resulting in much higher utility than traditional (pure) differential privacy mechanisms. In this paper we argue that Laplace noise may in fact be preferable to Gaussian noise in many settings, in particular for $(\varepsilon,\delta)$-differential privacy when $\delta$ is small. We consider two scenarios: First, we consider the problem of counting under continual observation and present a new generalization of the binary tree mechanism that uses a $k$-ary number system with $\textit{negative digits}$ to improve the privacy-accuracy trade-off. Our mechanism uses Laplace noise, and whenever $\delta$ is sufficiently small it improves the mean squared error over the best possible $(\varepsilon,\delta)$-differentially private factorization mechanisms based on Gaussian noise. Specifically, using $k=19$ we get an asymptotic improvement over the bound given in the work by Henzinger, Upadhyay and Upadhyay (SODA 2023) when $\delta = O(T^{-0.92})$. Second, we show that the noise added by the Gaussian mechanism can always be replaced by Laplace noise of comparable variance for the same $(\varepsilon, \delta)$-differential privacy guarantee, and in fact for sufficiently small $\delta$ the variance of the Laplace noise becomes strictly better. This challenges the conventional wisdom that Gaussian noise should be preferred in high dimensions. Finally, we study whether counting under continual observation may be easier in an average-case sense. We show that, under pure differential privacy, the expected worst-case error for a random input must be $\Omega(\log(T)/\varepsilon)$, matching the known lower bound for worst-case inputs.
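To make the headline comparison concrete, below is a minimal sketch (Python with NumPy, an assumption since the paper ships no code) contrasting the two classical one-dimensional calibrations: the textbook Gaussian scale $\sigma = \sqrt{2\ln(1.25/\delta)}\,\Delta/\varepsilon$ for $(\varepsilon,\delta)$-DP and the Laplace scale $b = \Delta/\varepsilon$, which already gives pure $\varepsilon$-DP and hence $(\varepsilon,\delta)$-DP for every $\delta \geq 0$. These are the classical bounds, not the paper's refined analysis.

```python
import numpy as np

def gaussian_scale(eps, delta, sensitivity=1.0):
    """Classical Gaussian-mechanism scale for (eps, delta)-DP (valid for eps <= 1)."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

def laplace_scale(eps, sensitivity=1.0):
    """Laplace-mechanism scale for pure eps-DP."""
    return sensitivity / eps

eps = 1.0
for delta in [1e-2, 1e-6, 1e-12]:
    var_gauss = gaussian_scale(eps, delta) ** 2  # Var[N(0, sigma^2)] = sigma^2
    var_lap = 2.0 * laplace_scale(eps) ** 2      # Var[Lap(b)] = 2 * b^2
    print(f"delta={delta:.0e}: Gaussian variance {var_gauss:6.2f}, Laplace variance {var_lap:.2f}")
```

As $\delta$ shrinks, the Gaussian variance grows like $\ln(1/\delta)$ while the Laplace variance is unchanged; quantifying when this favors Laplace, including in high dimensions, is what the paper does.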
Related papers
- Better Gaussian Mechanism using Correlated Noise [1.450405446885067]
We show that adding a random variable distributed as a Gaussian with variance scaled by $(\sqrt{d} + 1)/4$ to all counts allows us to reduce the variance of the independent noise samples to scale only with $(d + \sqrt{d})/4$.
The central idea of our mechanism is simple and the technique is flexible; a sketch of the shared-plus-independent noise idea follows this entry.
arXiv Detail & Related papers (2024-08-13T12:31:03Z)
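A hedged sketch of the correlated-noise idea summarized above (Python with NumPy): one shared Gaussian draw, with variance scaled by $(\sqrt{d}+1)/4$, is added to every count, which allows the independent per-count variance to scale with $(d+\sqrt{d})/4$ instead of $d$. The common factor `sigma2` standing in for the $(\varepsilon,\delta)$-dependent calibration is an assumption, not the paper's constant.

```python
import numpy as np

def correlated_gaussian_counts(counts, sigma2, rng):
    d = counts.shape[0]
    shared_sd = np.sqrt(sigma2 * (np.sqrt(d) + 1.0) / 4.0)  # one draw, added to all counts
    indep_sd = np.sqrt(sigma2 * (d + np.sqrt(d)) / 4.0)     # fresh noise per count
    return counts + rng.normal(0.0, shared_sd) + rng.normal(0.0, indep_sd, size=d)

rng = np.random.default_rng(0)
noisy = correlated_gaussian_counts(np.array([10.0, 3.0, 7.0, 5.0]), sigma2=4.0, rng=rng)
```

With these constants, the total per-coordinate variance is $\sigma^2(\sqrt{d}+1)^2/4$ rather than the $\sigma^2 d$ of the standard independent mechanism, roughly a factor-4 saving in high dimensions.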
- Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input-perturbation framework for differential privacy, where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
arXiv Detail & Related papers (2024-06-07T12:07:16Z)
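A generic input-perturbation sketch in the spirit of the entry above (Python with NumPy): perturb the input matrix once, project the noisy rows back to the unit sphere, and release all pairwise cosine similarities, which is then safe by post-processing. The noise scale `sigma` and the row-level privacy unit are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def private_cosine_similarities(A, sigma, rng):
    noisy = A + rng.normal(0.0, sigma, size=A.shape)    # perturb the input once
    norms = np.linalg.norm(noisy, axis=1, keepdims=True)
    unit = noisy / np.clip(norms, 1e-12, None)          # project rows to the unit sphere
    return unit @ unit.T                                # all pairwise cosines, post-processing

rng = np.random.default_rng(1)
sims = private_cosine_similarities(rng.normal(size=(5, 16)), sigma=0.1, rng=rng)
```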
- Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
arXiv Detail & Related papers (2023-09-27T17:09:36Z)
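For intuition, here is a minimal sketch of the $K$-norm mechanism for $K$ the $\ell_\infty$ ball, following the textbook construction rather than this paper's specific mechanisms: the density is proportional to $\exp(-\varepsilon \lVert z \rVert_\infty / \Delta)$, so the radius is Gamma$(d, \Delta/\varepsilon)$-distributed and the direction is uniform over the surface of the cube (Python with NumPy).

```python
import numpy as np

def k_norm_linf_noise(d, eps, Delta, rng):
    """Sample z with density proportional to exp(-eps * ||z||_inf / Delta)."""
    r = rng.gamma(shape=d, scale=Delta / eps)   # radius: surface area r^(d-1) times exp decay
    z = rng.uniform(-r, r, size=d)              # uniform inside the cube [-r, r]^d ...
    j = rng.integers(d)                         # ... then pick one of the 2d faces
    z[j] = r * rng.choice([-1.0, 1.0])          # and pin that coordinate to the face
    return z

rng = np.random.default_rng(2)
noisy_stat = np.zeros(4) + k_norm_linf_noise(4, eps=1.0, Delta=1.0, rng=rng)
```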
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that, for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces at least approximately as much error as this classical approach.
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
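A sketch of the classical recipe referenced above (Python with NumPy): clip, compute the exact mean, and add unbiased Gaussian noise. Calibration here uses $\rho$-zCDP, a standard form of concentrated differential privacy, with $\sigma = \Delta_2/\sqrt{2\rho}$; the clipping radius `c` and the replace-one sensitivity $2c/n$ are illustrative assumptions.

```python
import numpy as np

def private_mean_zcdp(X, rho, c, rng):
    n, d = X.shape
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X * np.minimum(1.0, c / np.clip(norms, 1e-12, None))  # clip rows to l2-radius c
    sens = 2.0 * c / n                     # replace-one l2 sensitivity of the mean
    sigma = sens / np.sqrt(2.0 * rho)      # Gaussian scale giving rho-zCDP
    return X.mean(axis=0) + rng.normal(0.0, sigma, size=d)    # unbiased by construction

rng = np.random.default_rng(3)
est = private_mean_zcdp(rng.normal(size=(1000, 8)), rho=0.5, c=5.0, rng=rng)
```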
- Concurrent Shuffle Differential Privacy Under Continual Observation [60.127179363119936]
We study the private continual summation problem (a.k.a. the counter problem); the classic binary-tree baseline for this problem is sketched after this entry.
We show that the concurrent shuffle model allows for significantly improved error compared to a standard (single) shuffle model.
arXiv Detail & Related papers (2023-01-29T20:42:54Z)
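The counter problem here is also the setting of the main paper's first result. Below is a minimal sketch of the classic binary-tree mechanism with Laplace noise, of which the paper's $k$-ary negative-digit scheme is a generalization (and which the concurrent-shuffle work refines in a different model): each prefix sum is assembled from $O(\log T)$ noisy dyadic node sums (Python with NumPy).

```python
import numpy as np

def tree_counter(stream, eps, rng):
    """Continual counting: return a private running count after every update."""
    T = len(stream)
    levels = T.bit_length()            # dyadic levels 0 .. levels-1
    scale = levels / eps               # each update touches at most `levels` nodes
    node = {}                          # (level, index) -> noisy interval sum
    estimates = []
    for t in range(1, T + 1):
        # Close every dyadic interval that ends exactly at time t.
        for lvl in range(levels):
            size = 1 << lvl
            if t % size == 0:
                idx = t // size - 1
                true = sum(stream[idx * size:(idx + 1) * size])
                node[(lvl, idx)] = true + rng.laplace(0.0, scale)
        # Decompose [0, t) greedily into disjoint, already-completed dyadic intervals.
        est, start = 0.0, 0
        for lvl in reversed(range(levels)):
            size = 1 << lvl
            if t - start >= size:
                est += node[(lvl, start // size)]
                start += size
        estimates.append(est)
    return estimates

rng = np.random.default_rng(6)
counts = tree_counter([1, 0, 1, 1, 0, 1, 0, 1], eps=1.0, rng=rng)
```

Each stream element lands in at most `levels` nodes (one per dyadic level), so scaling each node's Laplace noise by that count gives $\varepsilon$-DP by basic composition, while every released count sums only $O(\log T)$ noisy nodes.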
- Variance-Aware Sparse Linear Bandits [64.70681598741417]
The worst-case minimax regret for sparse linear bandits is $\widetilde{\Theta}\left(\sqrt{dT}\right)$.
In the benign setting where there is no noise and the action set is the unit sphere, one can use divide-and-conquer to achieve an $\widetilde{\mathcal{O}}(1)$ regret.
We develop a general framework that converts any variance-aware linear bandit algorithm to a variance-aware algorithm for sparse linear bandits.
arXiv Detail & Related papers (2022-05-26T15:55:44Z)
- Differential privacy for symmetric log-concave mechanisms [0.0]
Adding random noise to database query results is an important tool for achieving privacy.
We provide a necessary and sufficient condition for $(\epsilon, \delta)$-differential privacy for all symmetric and log-concave noise densities; a numerical sketch of the underlying privacy curve follows this entry.
arXiv Detail & Related papers (2022-02-23T10:20:29Z)
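Concretely, for an additive mechanism with symmetric noise density $p$ and sensitivity $\Delta$, the tight privacy curve is $\delta(\varepsilon) = \int \max(p(x) - e^{\varepsilon} p(x-\Delta), 0)\,dx$. The grid quadrature below (Python with NumPy) is an illustration of this standard characterization, not the paper's analytic condition; it also exhibits the Laplace-vs-Gaussian contrast from the main paper, since Laplace hits $\delta = 0$ at $\varepsilon = \Delta/b$ while the Gaussian curve never does.

```python
import numpy as np

def delta_of_eps(log_density, eps, Delta=1.0, lo=-40.0, hi=40.0, n=400_001):
    """delta(eps) = integral of max(p(x) - e^eps * p(x - Delta), 0) dx, on a grid."""
    x = np.linspace(lo, hi, n)
    p = np.exp(log_density(x))
    q = np.exp(log_density(x - Delta))
    return float(np.maximum(p - np.exp(eps) * q, 0.0).sum() * (x[1] - x[0]))

log_laplace = lambda x, b=1.0: -np.abs(x) / b - np.log(2.0 * b)
log_gauss = lambda x, s=1.0: -x**2 / (2.0 * s**2) - 0.5 * np.log(2.0 * np.pi * s**2)

for eps in [0.25, 0.5, 1.0]:
    print(eps, delta_of_eps(log_laplace, eps), delta_of_eps(log_gauss, eps))
```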
- Infinitely Divisible Noise in the Low Privacy Regime [9.39772079241093]
Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning.
We present the first infinitely divisible noise distribution for real-valued data that achieves $\varepsilon$-differential privacy; the classical divisible-Laplace construction it improves on is sketched after this entry.
arXiv Detail & Related papers (2021-10-13T08:16:43Z)
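For context, here is the classical infinite-divisibility trick for Laplace noise that distributed-DP work builds on (the paper's own distribution targets the low-privacy regime and is not reproduced here): Laplace$(b)$ is a difference of two Exponential$(b)$ draws, and each Exponential$(b)$ splits into $n$ i.i.d. Gamma$(1/n, b)$ summands, so $n$ users can each add a small local share whose sum is exactly Laplace (Python with NumPy).

```python
import numpy as np

def local_laplace_share(n, b, rng):
    """One user's additive share; the sum of n shares is exactly Laplace(b)."""
    return rng.gamma(1.0 / n, b) - rng.gamma(1.0 / n, b)

rng = np.random.default_rng(4)
n, b = 100, 2.0
total_noise = sum(local_laplace_share(n, b, rng) for _ in range(n))  # ~ Laplace(scale=b)
```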
- Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
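A minimal sketch of the shuffle-model pipeline analyzed above (Python with NumPy): each user applies an $\varepsilon_0$-DP binary randomized response locally, and the shuffler forwards only the unordered multiset of reports. Computing the amplified central $(\varepsilon,\delta)$ guarantee is the paper's contribution and is not reproduced here.

```python
import numpy as np

def randomized_response(bit, eps0, rng):
    """eps0-DP local randomizer: keep the true bit with prob e^eps0 / (e^eps0 + 1)."""
    keep = rng.random() < np.exp(eps0) / (np.exp(eps0) + 1.0)
    return bit if keep else 1 - bit

def shuffle_reports(bits, eps0, rng):
    reports = np.array([randomized_response(int(b), eps0, rng) for b in bits])
    rng.shuffle(reports)        # the shuffler: only the multiset of reports survives
    return reports

rng = np.random.default_rng(5)
out = shuffle_reports(rng.integers(0, 2, size=1000), eps0=1.0, rng=rng)
```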
- A bounded-noise mechanism for differential privacy [3.9596068699962323]
We output an approximation of the average $\frac{1}{n}\sum_{i=1}^{n} \vec{x}^{(i)}$ of vectors $\vec{x}^{(i)} \in [0,1]^k$, while preserving privacy with respect to any single $\vec{x}^{(i)}$.
We present an $(\epsilon,\delta)$-private mechanism with optimal $\ell_\infty$ error for most values of $\delta$.
arXiv Detail & Related papers (2020-12-07T16:03:21Z)
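For contrast with the bounded-noise mechanism in the entry above (whose construction is not reproduced here), the following is the standard unbounded-noise Laplace baseline it improves on, assuming Python with NumPy and a replace-one $\ell_1$ sensitivity of $k/n$ for the mean; the baseline's $\ell_\infty$ error carries an extra $\log k$ factor that bounded-noise constructions avoid.

```python
import numpy as np

def laplace_average(X, eps, rng):
    n, k = X.shape
    sens_l1 = k / n                    # replace-one l1 sensitivity of the mean over [0,1]^k
    return X.mean(axis=0) + rng.laplace(0.0, sens_l1 / eps, size=k)

rng = np.random.default_rng(7)
avg = laplace_average(rng.random((500, 32)), eps=1.0, rng=rng)
```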
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.