Optimal Algorithms for Mean Estimation under Local Differential Privacy
- URL: http://arxiv.org/abs/2205.02466v1
- Date: Thu, 5 May 2022 06:43:46 GMT
- Title: Optimal Algorithms for Mean Estimation under Local Differential Privacy
- Authors: Hilal Asi, Vitaly Feldman, Kunal Talwar
- Abstract summary: We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
- Score: 55.32262879188817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of mean estimation of $\ell_2$-bounded vectors under the
constraint of local differential privacy. While the literature has a variety of
algorithms that achieve the asymptotically optimal rates for this problem, the
performance of these algorithms in practice can vary significantly due to
varying (and often large) hidden constants. In this work, we investigate the
question of designing the protocol with the smallest variance. We show that
PrivUnit (Bhowmick et al. 2018) with optimized parameters achieves the optimal
variance among a large family of locally private randomizers. To prove this
result, we establish some properties of local randomizers, and use
symmetrization arguments that allow us to write the optimal randomizer as the
optimizer of a certain linear program. These structural results, which should
extend to other problems, then allow us to show that the optimal randomizer
belongs to the PrivUnit family.
We also develop a new variant of PrivUnit based on the Gaussian distribution
which is more amenable to mathematical analysis and enjoys the same optimality
guarantees. This allows us to establish several useful properties on the exact
constants of the optimal error as well as to numerically estimate these
constants.
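
To make the randomizer concrete, here is a minimal, illustrative Python sketch of the PrivUnit mechanism described in Bhowmick et al. 2018: with probability p the report is drawn from the spherical cap around the input direction, and otherwise from its complement. The function and parameter names are ours; the calibration of (p, gamma) from epsilon, the unbiasing scale factor the real mechanism applies, and an efficient cap sampler are all omitted, so this is a sketch rather than a faithful implementation.

```python
import numpy as np

def priv_unit(u, gamma, p, rng=None):
    """Illustrative sketch of the PrivUnit randomizer (Bhowmick et al. 2018).

    u     : unit vector in R^d (the client's true direction).
    gamma : cap height in [0, 1); together with p it determines the privacy level.
    p     : probability of reporting a vector from the cap around u.

    The real mechanism samples the cap directly and multiplies the output by a
    constant m(p, gamma, d) so the server's average is unbiased; that constant
    and the epsilon-based calibration of (p, gamma) are omitted here.
    """
    rng = rng or np.random.default_rng()
    d = u.shape[0]
    in_cap = rng.random() < p            # biased coin: cap vs. complement
    while True:                          # rejection sampling, for readability only;
        v = rng.standard_normal(d)       # it is slow in high dimensions
        v /= np.linalg.norm(v)           # uniform draw on the unit sphere
        if (v @ u >= gamma) == in_cap:   # keep draws on the chosen side of the cap
            return v
```

The server averages the (rescaled) reports to estimate the mean; the paper's result is that with optimally chosen p and gamma, no randomizer in a large natural family achieves smaller variance.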
Related papers
- Differential Privacy with Multiple Selections [52.5653572324012]
We consider the setting where a user with sensitive features wishes to obtain a recommendation from a server in a differentially private fashion.
We propose a "multi-selection" architecture where the server sends back multiple recommendations and the user chooses the one that best matches their private features.
arXiv Detail & Related papers (2024-07-19T19:34:51Z)
- Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
arXiv Detail & Related papers (2023-09-27T17:09:36Z)
- Fast Optimal Locally Private Mean Estimation via Random Projections [58.603579803010796]
We study the problem of locally private mean estimation of high-dimensional vectors in the Euclidean ball.
We propose a new algorithmic framework, ProjUnit, for private mean estimation.
Our framework is deceptively simple: each randomizer projects its input to a random low-dimensional subspace, normalizes the result, and then runs an optimal algorithm (a sketch of this recipe appears after this list).
arXiv Detail & Related papers (2023-06-07T14:07:35Z)
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as the best of these Gaussian noise mechanisms (a sketch of the baseline appears after this list).
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
- Optimal Compression of Locally Differentially Private Mechanisms [21.200464908282594]
We demonstrate the benefits of schemes that jointly compress and privatize the data using shared randomness.
Our theoretical and empirical findings show that our approach can compress PrivUnit (Bhowmick et al.) and Subset Selection (Ye et al.), the best known LDP algorithms for mean and frequency estimation respectively, to the order of $\epsilon$ bits of communication while preserving their privacy and accuracy guarantees.
arXiv Detail & Related papers (2021-10-29T21:36:34Z)
- No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization through the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z)
- Projection-Free Bandit Optimization with Privacy Guarantees [18.95253453749389]
We design differentially private algorithms for the bandit convex optimization problem in the projection-free setting.
This setting is important whenever the decision set has a complex geometry, and access to it is done efficiently only through a linear optimization oracle.
This is the first differentially private algorithm for projection-free bandit optimization, and in fact our bound of $\widetilde{O}(T^{3/4})$ matches the best known non-private projection-free algorithm.
arXiv Detail & Related papers (2020-12-22T16:19:29Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
- Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives [28.99826590351627]
We propose an algorithm based on noisy mirror descent, which achieves optimal rates both in statistical complexity and in the number of queries to a first-order stochastic oracle in the regime where the privacy parameter is inversely proportional to the number of samples.
arXiv Detail & Related papers (2020-02-22T03:03:43Z)
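
As a companion to the ProjUnit entry above, here is a hedged sketch of that project-normalize-privatize recipe. The function name and the `ldp_randomizer` argument are hypothetical stand-ins; a dense Gaussian projection is used for readability (more structured transforms make this fast in practice), and the norm-correction and unbiasing constants are omitted.

```python
import numpy as np

def proj_unit_client(x, k, seed, ldp_randomizer):
    """Sketch of the ProjUnit framework: project, normalize, privatize.

    x              : client vector in the unit ball of R^d.
    k              : target dimension, k << d.
    seed           : shared randomness fixing this client's projection, so the
                     server can apply the same matrix when aggregating.
    ldp_randomizer : black-box optimal LDP randomizer on the unit sphere in R^k
                     (e.g. PrivUnit); hypothetical argument for this sketch.
    """
    d = x.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((k, d)) / np.sqrt(k)  # random k x d projection
    y = W @ x
    y /= np.linalg.norm(y)    # normalize onto the unit sphere in R^k
    z = ldp_randomizer(y)     # privatize in the low-dimensional space
    return W.T @ z            # lift back to R^d; scaling constants omitted
```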
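
And for the General Gaussian Noise Mechanisms entry, a minimal sketch of the classical baseline it analyzes: release the exact mean plus unbiased Gaussian noise (isotropic here for simplicity; the paper also considers correlated noise). The function name and `sigma` are ours, and calibrating the noise to the data's sensitivity and the privacy budget is omitted.

```python
import numpy as np

def noisy_mean(X, sigma, rng=None):
    """Classical baseline: exact empirical mean plus unbiased Gaussian noise.

    X     : (n, d) array of individuals' vectors, assumed norm-bounded.
    sigma : noise scale; calibrating it to the mean's sensitivity and the
            privacy budget is omitted in this sketch.
    """
    rng = rng or np.random.default_rng()
    mean = X.mean(axis=0)                                   # true empirical mean
    return mean + sigma * rng.standard_normal(X.shape[1])   # unbiased release
```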
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.