User-Level Differential Privacy With Few Examples Per User
- URL: http://arxiv.org/abs/2309.12500v1
- Date: Thu, 21 Sep 2023 21:51:55 GMT
- Title: User-Level Differential Privacy With Few Examples Per User
- Authors: Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Raghu Meka, Chiyuan Zhang
- Abstract summary: We consider the example-scarce regime, where each user has only a few examples, and obtain the following results.
For approximate-DP, we give a generic transformation of any item-level DP algorithm to a user-level DP algorithm.
We present a simple technique for adapting the exponential mechanism [McSherry, Talwar FOCS 2007] to the user-level setting.
- Score: 73.81862394073308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous work on user-level differential privacy (DP) [Ghazi et al. NeurIPS
2021, Bun et al. STOC 2023] obtained generic algorithms that work for various
learning tasks. However, their focus was on the example-rich regime, where the
users have so many examples that each user could themselves solve the problem.
In this work we consider the example-scarce regime, where each user has only a
few examples, and obtain the following results:
1. For approximate-DP, we give a generic transformation of any item-level DP
algorithm to a user-level DP algorithm. Roughly speaking, the latter gives a
(multiplicative) savings of $O_{\varepsilon,\delta}(\sqrt{m})$ in terms of the
number of users required for achieving the same utility, where $m$ is the
number of examples per user. This algorithm, while recovering most known bounds
for specific problems, also gives new bounds, e.g., for PAC learning.
2. For pure-DP, we present a simple technique for adapting the exponential
mechanism [McSherry, Talwar FOCS 2007] to the user-level setting. This gives
new bounds for a variety of tasks, such as private PAC learning, hypothesis
selection, and distribution learning. For some of these problems, we show that
our bounds are near-optimal.
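The pure-DP result above adapts the exponential mechanism [McSherry, Talwar FOCS 2007] to the user-level setting. As background only, here is a minimal sketch of the classic item-level exponential mechanism, not the paper's user-level adaptation; the candidate set, utility function, and parameters are hypothetical:

```python
import math
import random

def exponential_mechanism(candidates, utility, sensitivity, epsilon):
    """Sample a candidate with probability proportional to
    exp(epsilon * u(c) / (2 * sensitivity)): the item-level
    exponential mechanism of McSherry-Talwar."""
    scores = [epsilon * utility(c) / (2.0 * sensitivity) for c in candidates]
    top = max(scores)  # subtract the max score for numerical stability
    weights = [math.exp(s - top) for s in scores]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point round-off
```

With a large privacy budget and a clear utility gap, the mechanism returns the highest-utility candidate with overwhelming probability.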
Related papers
- On Computing Pairwise Statistics with Local Differential Privacy [55.81991984375959]
We study the problem of computing pairwise statistics, i.e., ones of the form $\binom{n}{2}^{-1} \sum_{i \ne j} f(x_i, x_j)$, where $x_i$ denotes the input to the $i$th user, with differential privacy (DP) in the local model.
This formulation captures important metrics such as Kendall's $\tau$ coefficient, Area Under Curve, Gini's mean difference, Gini's entropy, etc.
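As a non-private illustration of the statistic itself (the local-DP protocol is not reproduced here), a minimal sketch that evaluates the formula verbatim; the function name and example choice of $f$ are hypothetical:

```python
from math import comb

def pairwise_statistic(xs, f):
    """Compute binom(n, 2)^{-1} * sum over ordered pairs i != j
    of f(x_i, x_j), exactly as written in the abstract."""
    n = len(xs)
    total = sum(f(xs[i], xs[j])
                for i in range(n) for j in range(n) if i != j)
    return total / comb(n, 2)
```

For instance, taking `f(a, b) = abs(a - b) / 2` recovers a Gini-mean-difference-style average over unordered pairs.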
arXiv Detail & Related papers (2024-06-24T04:06:09Z) - Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via the following simple observation.
We obtain several improved algorithms for private optimization problems, including decomposable submodular maximization and set cover.
arXiv Detail & Related papers (2024-05-28T19:02:30Z) - Privately Estimating a Gaussian: Efficient, Robust and Optimal [6.901744415870126]
We give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models.
In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrary tiny total variation error.
For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde{O}(d)$, improving on a $\widetilde{O}(d^{1.5})$ bound from prior work.
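The Gaussian estimation result above relies on far more sophisticated machinery; as a loose illustration of only the basic clip-and-add-noise idea behind pure-DP mean estimation, a minimal 1-D Laplace-mechanism sketch (the function name and parameters are hypothetical, and this is not the paper's algorithm):

```python
import math
import random

def private_mean_1d(samples, clip, epsilon):
    """Pure-DP estimate of a 1-D mean: clip each sample to
    [-clip, clip], so the average has sensitivity 2*clip/n,
    then add Laplace noise calibrated to that sensitivity."""
    n = len(samples)
    clipped = [max(-clip, min(clip, x)) for x in samples]
    avg = sum(clipped) / n
    scale = 2.0 * clip / (n * epsilon)
    # Laplace(scale) noise via inverse-CDF sampling
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return avg + noise
```

The noise scale shrinks as $1/(n\varepsilon)$, so with many samples or a large budget the estimate concentrates near the clipped empirical mean.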
arXiv Detail & Related papers (2022-12-15T18:27:39Z) - Discrete Distribution Estimation under User-level Local Differential Privacy [37.65849910114053]
We study discrete distribution estimation under user-level local differential privacy (LDP)
In user-level $\varepsilon$-LDP, each user has $m \ge 1$ samples and the privacy of all $m$ samples must be preserved simultaneously.
arXiv Detail & Related papers (2022-11-07T18:29:32Z) - Multi-User Reinforcement Learning with Low Rank Rewards [41.15103860230677]
Our main contribution is an algorithm which explores rewards collaboratively with $N$ user-specific MDPs.
When $N$ is large and the rank is constant, the sample complexity per MDP depends logarithmically over the size of the state-space.
arXiv Detail & Related papers (2022-10-11T11:36:21Z) - User-Level Private Learning via Correlated Sampling [49.453751858361265]
We consider the setting where each user holds $m$ samples and the privacy protection is enforced at the level of each user's data.
We show that, in this setting, we may learn with far fewer users.
arXiv Detail & Related papers (2021-10-21T15:33:53Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate, on two popular deep learning tasks, their empirical advantages over standard gradient methods.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.