Learning discrete distributions: user vs item-level privacy
- URL: http://arxiv.org/abs/2007.13660v3
- Date: Mon, 11 Jan 2021 22:15:07 GMT
- Title: Learning discrete distributions: user vs item-level privacy
- Authors: Yuhan Liu, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, Michael
Riley
- Abstract summary: Recently, many practical applications, such as federated learning, require preserving privacy for all items of a single user.
We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy.
We propose a mechanism such that the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) + k/\sqrt{m}\epsilon\alpha)$ and hence the privacy penalty is $\tilde{\Theta}(\sqrt{m})$ times smaller compared to the standard mechanisms.
- Score: 47.05234904407048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much of the literature on differential privacy focuses on item-level privacy,
where loosely speaking, the goal is to provide privacy per item or training
example. However, recently many practical applications such as federated
learning require preserving privacy for all items of a single user, which is
much harder to achieve. Therefore understanding the theoretical limit of
user-level privacy becomes crucial.
We study the fundamental problem of learning discrete distributions over $k$
symbols with user-level differential privacy. If each user has $m$ samples, we
show that straightforward applications of Laplace or Gaussian mechanisms
require the number of users to be $\mathcal{O}(k/(m\alpha^2) +
k/\epsilon\alpha)$ to achieve an $\ell_1$ distance of $\alpha$ between the true
and estimated distributions, with the privacy-induced penalty
$k/\epsilon\alpha$ independent of the number of samples per user $m$. Moreover,
we show that any mechanism that only operates on the final aggregate counts
should require a user complexity of the same order. We then propose a mechanism
such that the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) +
k/\sqrt{m}\epsilon\alpha)$ and hence the privacy penalty is
$\tilde{\Theta}(\sqrt{m})$ times smaller compared to the standard mechanisms in
certain settings of interest. We further show that the proposed mechanism is
nearly-optimal under certain regimes.
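To make the baseline above concrete, here is a minimal sketch of the straightforward user-level Laplace mechanism the abstract refers to (my own illustration, not code from the paper; the equal-$m$ assumption and the crude simplex projection are simplifications): pool all users' counts, then add per-coordinate Laplace noise calibrated to the $\ell_1$ sensitivity $2m$, since replacing one user's entire batch of $m$ items moves the aggregate count vector by at most $2m$ in $\ell_1$.

```python
import numpy as np

def naive_laplace_estimate(user_samples, k, epsilon, rng=None):
    """User-level epsilon-DP histogram estimate via the plain Laplace mechanism.

    user_samples: list of equal-length integer arrays (one per user),
    with symbols in {0, ..., k-1}. Replacing one user's whole batch of
    m items changes the aggregate count vector by at most 2m in l1,
    so Lap(2m/epsilon) noise per coordinate gives epsilon-DP.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = len(user_samples)
    m = len(user_samples[0])  # samples per user, assumed equal across users
    counts = np.zeros(k)
    for s in user_samples:
        counts += np.bincount(s, minlength=k)
    noisy = counts + rng.laplace(scale=2.0 * m / epsilon, size=k)
    # Normalize, then crudely project back onto the probability simplex
    # (post-processing, so privacy is unaffected).
    p_hat = np.clip(noisy / (n * m), 0.0, None)
    total = p_hat.sum()
    return p_hat / total if total > 0 else np.full(k, 1.0 / k)

# Usage with hypothetical sizes: n=2000 users, m=100 samples each, k=50.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(50))
users = [rng.choice(50, size=100, p=p) for _ in range(2000)]
est = naive_laplace_estimate(users, k=50, epsilon=1.0, rng=rng)
```

The expected $\ell_1$ error splits into a sampling term of order $\sqrt{k/(nm)}$ and a noise term of order $k \cdot (2m/\epsilon)/(nm) = 2k/(n\epsilon)$; setting each term to $\alpha$ recovers the $\mathcal{O}(k/(m\alpha^2) + k/\epsilon\alpha)$ user complexity above, with the privacy term independent of $m$.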
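And a back-of-the-envelope view of where the $\sqrt{m}$ saving can come from (a heuristic of my own for a single symbol $i$; the paper's actual mechanism is more involved and handles all $k$ symbols jointly):

```latex
% By Hoeffding, each user's empirical frequency of symbol i concentrates:
\[
  \Pr\Big[\big|\hat{p}^{(u)}_i - p_i\big| \ge \sqrt{\tfrac{\log(2n/\beta)}{2m}}\Big]
  \;\le\; \frac{\beta}{n},
\]
% so with probability 1 - beta all n users' values lie in an interval of
% width $w = O\big(\sqrt{\log(n/\beta)/m}\big)$. Clipping contributions to
% such an interval (located by a cheap private subroutine) reduces the
% sensitivity of the averaged estimate from $1/n$ to $w/n$, so the Laplace
% noise needed for user-level $\epsilon$-DP shrinks by a factor of
% $\tilde{\Theta}(\sqrt{m})$, matching the improvement in the abstract.
```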
We also propose general techniques for obtaining lower bounds on restricted
differentially private estimators and a lower bound on the total variation
between binomial distributions, both of which might be of independent interest.
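For the binomial total-variation claim just mentioned, bounds of the following standard shape are the kind such arguments need (stated here only as a reference point; the paper's exact statement and constants may differ):

```latex
% For p, q bounded away from 0 and 1 (implied constants depend on the bound),
\[
  d_{\mathrm{TV}}\big(\mathrm{Bin}(m,p),\,\mathrm{Bin}(m,q)\big)
  \;=\; \Theta\!\big(\min\{1,\ \sqrt{m}\,\lvert p-q\rvert\}\big),
\]
% so two binomials become statistically indistinguishable exactly when their
% parameters differ by $o(1/\sqrt{m})$; a lower bound on this distance is
% what forces any accurate estimator to separate such pairs, which is how it
% feeds into user-complexity lower bounds.
```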
Related papers
- User-Level Differential Privacy With Few Examples Per User [73.81862394073308]
We consider the example-scarce regime, where each user has only a few examples, and obtain the following results.
For approximate-DP, we give a generic transformation of any item-level DP algorithm to a user-level DP algorithm.
We present a simple technique for adapting the exponential mechanism [McSherry, Talwar FOCS 2007] to the user-level setting.
arXiv Detail & Related papers (2023-09-21T21:51:55Z)
- Differentially-Private Bayes Consistency [70.92545332158217]
We construct a Bayes consistent learning rule that satisfies differential privacy (DP)
We prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal sample complexity.
arXiv Detail & Related papers (2022-12-08T11:57:30Z)
- Discrete Distribution Estimation under User-level Local Differential Privacy [37.65849910114053]
We study discrete distribution estimation under user-level local differential privacy (LDP)
In user-level $\varepsilon$-LDP, each user has $m \ge 1$ samples and the privacy of all $m$ samples must be preserved simultaneously.
arXiv Detail & Related papers (2022-11-07T18:29:32Z)
- Robust Estimation of Discrete Distributions under Local Differential Privacy [1.52292571922932]
We consider the problem of estimating a discrete distribution in total variation from $n$ contaminated data batches under a local differential privacy constraint.
We show that combining the two constraints leads to a minimax estimation rate of $\epsilon\sqrt{d/\alpha^2 k}+\sqrt{d^2/\alpha^2 kn}$ up to a $\sqrt{\log(1/\epsilon)}$ factor.
arXiv Detail & Related papers (2022-02-14T15:59:02Z)
- Tight and Robust Private Mean Estimation with Few Users [16.22135057266913]
We study high-dimensional mean estimation under user-level differential privacy.
We design an $(\epsilon,\delta)$-differentially private mechanism using as few users as possible.
arXiv Detail & Related papers (2021-10-22T16:02:21Z)
- User-Level Private Learning via Correlated Sampling [49.453751858361265]
We consider the setting where each user holds $m$ samples and the privacy protection is enforced at the level of each user's data.
We show that, in this setting, we may learn with far fewer users.
arXiv Detail & Related papers (2021-10-21T15:33:53Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)