User-Level Private Learning via Correlated Sampling
- URL: http://arxiv.org/abs/2110.11208v1
- Date: Thu, 21 Oct 2021 15:33:53 GMT
- Title: User-Level Private Learning via Correlated Sampling
- Authors: Badih Ghazi, Ravi Kumar, Pasin Manurangsi
- Abstract summary: We consider the setting where each user holds $m$ samples and the privacy protection is enforced at the level of each user's data.
We show that, in this setting, far fewer users suffice for learning.
- Score: 49.453751858361265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most works in learning with differential privacy (DP) have focused on the
setting where each user has a single sample. In this work, we consider the
setting where each user holds $m$ samples and the privacy protection is
enforced at the level of each user's data. We show that, in this setting,
learning is possible with far fewer users. Specifically, we show that, as
long as each user receives sufficiently many samples, we can learn any
privately learnable class via an $(\epsilon, \delta)$-DP algorithm using only
$O(\log(1/\delta)/\epsilon)$ users. For $\epsilon$-DP algorithms, we show that
we can learn using only $O_{\epsilon}(d)$ users even in the local model, where
$d$ is the probabilistic representation dimension. In both cases, we show a
nearly-matching lower bound on the number of users required.
A crucial component of our results is a generalization of global stability
[Bun et al., FOCS 2020] that allows the use of public randomness. Under this
relaxed notion, we employ a correlated sampling strategy to show that the
global stability can be boosted to be arbitrarily close to one, at a polynomial
expense in the number of samples.
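The correlated-sampling primitive the abstract alludes to can be illustrated with a Holenstein-style rejection scheme on a shared random stream: two parties holding nearby distributions who consume the same public randomness output the same sample except with probability bounded by their total variation distance. The following is a minimal illustrative sketch, not the paper's algorithm; the function name and the toy distributions are assumptions for demonstration.

```python
import random

def correlated_sample(dist, shared_seed, domain):
    """Sample from `dist` via rejection on a stream of public randomness.
    Parties with close distributions P and Q using the same seed output
    the same element except with probability O(TV(P, Q))."""
    rng = random.Random(shared_seed)
    while True:
        x = rng.choice(domain)   # shared uniform proposal
        u = rng.random()         # shared uniform threshold in [0, 1)
        if u < dist[x]:          # accept iff threshold falls under dist(x)
            return x

# Two nearby distributions over {0, 1, 2} with TV distance 0.05:
P = {0: 0.5, 1: 0.3, 2: 0.2}
Q = {0: 0.5, 1: 0.25, 2: 0.25}

domain = [0, 1, 2]
agree = sum(
    correlated_sample(P, seed, domain) == correlated_sample(Q, seed, domain)
    for seed in range(10_000)
)
# agree / 10_000 should be close to 1, since TV(P, Q) is small.
```

Because each proposal `x` is accepted with probability proportional to `dist[x]`, each call is an exact sample from `dist`; the coupling across parties comes entirely from reusing the same public stream.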
Related papers
- User-Level Differential Privacy With Few Examples Per User [73.81862394073308]
We consider the example-scarce regime, where each user has only a few examples, and obtain the following results.
For approximate-DP, we give a generic transformation of any item-level DP algorithm to a user-level DP algorithm.
We present a simple technique for adapting the exponential mechanism [McSherry, Talwar FOCS 2007] to the user-level setting.
arXiv Detail & Related papers (2023-09-21T21:51:55Z)
- Continual Mean Estimation Under User-Level Privacy [21.513703308495774]
We consider the problem of continually releasing an estimate of the population mean of a stream of samples that is user-level differentially private (DP)
We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP.
arXiv Detail & Related papers (2022-12-20T03:44:25Z)
- Discrete Distribution Estimation under User-level Local Differential Privacy [37.65849910114053]
We study discrete distribution estimation under user-level local differential privacy (LDP)
In user-level $\varepsilon$-LDP, each user has $m \ge 1$ samples and the privacy of all $m$ samples must be preserved simultaneously.
arXiv Detail & Related papers (2022-11-07T18:29:32Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Learning discrete distributions: user vs item-level privacy [47.05234904407048]
Recently, many practical applications, such as federated learning, require preserving privacy for all items of a single user.
We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy.
We propose a mechanism such that the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) + k/(\sqrt{m}\epsilon\alpha))$ and hence the privacy penalty is $\tilde{\Theta}(\sqrt{m})$ times smaller compared to the standard mechanisms.
arXiv Detail & Related papers (2020-07-27T16:15:14Z)
- Private Query Release Assisted by Public Data [96.6174729958211]
We study the problem of differentially private query release assisted by access to public data.
We show that we can solve the problem for any query class $\mathcal{H}$ of finite VC-dimension using only $d/\alpha$ public samples and $\sqrt{p}\,d^{3/2}/\alpha^2$ private samples.
arXiv Detail & Related papers (2020-04-23T02:46:37Z)
- Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)