About the Cost of Central Privacy in Density Estimation
- URL: http://arxiv.org/abs/2306.14535v4
- Date: Wed, 18 Sep 2024 08:46:04 GMT
- Title: About the Cost of Central Privacy in Density Estimation
- Authors: Clément Lalanne, Aurélien Garivier, Rémi Gribonval
- Abstract summary: We study non-parametric density estimation for densities in Lipschitz and Sobolev spaces.
We consider regimes where the privacy budget is not assumed to be constant.
- Score: 17.04261371990489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study non-parametric density estimation for densities in Lipschitz and Sobolev spaces, and under central privacy. In particular, we investigate regimes where the privacy budget is not assumed to be constant. We consider the classical definition of central differential privacy, but also the more recent notion of central concentrated differential privacy. We recover the result of Barber and Duchi (2014) that histogram estimators are optimal against Lipschitz distributions for the L2 risk under regular differential privacy, and we extend it to other norms and notions of privacy. Then, we investigate higher degrees of smoothness, drawing two conclusions: First, and contrary to what happens with a constant privacy budget (Wasserman and Zhou, 2010), there are regimes where imposing privacy degrades the regular minimax risk of estimation on Sobolev densities. Second, so-called projection estimators are near-optimal against the same classes of densities in this new setup with pure differential privacy, but, contrary to the constant privacy budget case, this comes at the cost of a relaxation. With zero concentrated differential privacy, there is no need for relaxation, and we prove that the estimation is optimal.
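The histogram estimators discussed in the abstract admit a standard private form: bin the samples, perturb the counts with Laplace noise calibrated to the sensitivity of the count vector, and normalize. The sketch below is a minimal illustration under pure ε-DP, not the authors' exact construction; all function and variable names are ours.

```python
import numpy as np

def private_histogram_density(samples, n_bins, epsilon, rng=None):
    """Pure eps-DP histogram density estimator on [0, 1] (illustrative).

    Replacing one sample changes at most two bin counts by 1 each, so the
    L1 sensitivity of the count vector is 2 and Laplace noise of scale
    2/epsilon suffices for eps-DP.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts, edges = np.histogram(samples, bins=n_bins, range=(0.0, 1.0))
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=n_bins)
    # Clip negative noisy counts and renormalize so the estimate integrates to 1.
    density = np.clip(noisy, 0.0, None)
    mass = density.sum()
    if mass > 0:
        density = density / (mass * (edges[1] - edges[0]))
    return density, edges

# Usage: estimate a density from 10k samples with epsilon = 0.5.
rng = np.random.default_rng(0)
data = rng.beta(2, 5, size=10_000)
dens, edges = private_histogram_density(data, n_bins=32, epsilon=0.5, rng=rng)
```

The bin count trades off approximation error against the added noise; the paper's regimes of interest are precisely those where the optimal trade-off shifts as the privacy budget shrinks.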
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
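BCDP's actual mechanism is not reproduced here; as a rough, hypothetical illustration of feature-specific budgets, the sketch below releases each coordinate of a bounded record with its own Laplace scale, so more sensitive features receive proportionally more noise.

```python
import numpy as np

def per_feature_laplace_release(x, epsilons, sensitivities, rng=None):
    """Release each coordinate x[j] under its own budget epsilons[j].

    Illustrative only: coordinate j gets Laplace noise of scale
    sensitivities[j] / epsilons[j], so features with a small budget
    (stronger protection) are perturbed more heavily.
    """
    rng = np.random.default_rng() if rng is None else rng
    scales = np.asarray(sensitivities) / np.asarray(epsilons)
    return np.asarray(x) + rng.laplace(scale=scales)

# Example: a location-like feature protected strongly (eps = 0.1)
# next to an age-like feature protected loosely (eps = 1.0).
noisy = per_feature_laplace_release([0.3, 42.0],
                                    epsilons=[0.1, 1.0],
                                    sensitivities=[1.0, 1.0])
```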
arXiv Detail & Related papers (2024-10-24T03:39:55Z) - Mean Estimation Under Heterogeneous Privacy Demands [5.755004576310333]
This work considers the problem of mean estimation, where each user can impose their own privacy level.
The algorithm we propose is shown to be minimax optimal and has a near-linear run-time.
Users with less strict but differing privacy requirements are all granted more privacy than they demand, in equal amounts.
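As a simple illustration of heterogeneous budgets (not the paper's minimax-optimal algorithm), the hypothetical sketch below privatizes each user's value at their own level and combines the reports with inverse-variance weights.

```python
import numpy as np

def heterogeneous_private_mean(values, epsilons, rng=None):
    """Inverse-variance-weighted mean of per-user privatized values.

    Each user i holds a value in [0, 1] and demands budget epsilons[i];
    their report gets Laplace noise of scale 1/epsilons[i] (sensitivity 1).
    Since Var(Laplace(b)) = 2 b^2, weights proportional to epsilons[i]**2
    are inverse-variance optimal for this naive baseline.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = np.asarray(epsilons, dtype=float)
    reports = np.asarray(values, dtype=float) + rng.laplace(scale=1.0 / eps)
    return float(np.average(reports, weights=eps ** 2))

est = heterogeneous_private_mean([0.2, 0.8, 0.5], epsilons=[0.1, 1.0, 2.0])
```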
arXiv Detail & Related papers (2023-10-19T20:29:19Z) - Mean Estimation Under Heterogeneous Privacy: Some Privacy Can Be Free [13.198689566654103]
This work considers the problem of mean estimation under heterogeneous Differential Privacy constraints.
The algorithm we propose is shown to be minimax optimal when there are two groups of users with distinct privacy levels.
arXiv Detail & Related papers (2023-04-27T05:23:06Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
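The EVR control flow can be illustrated with a minimal sketch; the estimator and verifier passed in below are hypothetical placeholders, not the paper's actual Monte Carlo components.

```python
def estimate_verify_release(mechanism, data, estimate_eps, verify, budget):
    """Illustrative EVR control flow (all arguments are hypothetical).

    1. Estimate the mechanism's privacy parameter.
    2. Verify that the estimate is a valid guarantee within the budget.
    3. Release the query output only if verification passes.
    """
    eps_hat = estimate_eps(mechanism)           # step 1: estimate
    if not verify(mechanism, eps_hat) or eps_hat > budget:
        raise RuntimeError("privacy guarantee not verified; withholding output")
    return mechanism(data)                      # step 3: release
```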
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - Echo of Neighbors: Privacy Amplification for Personalized Private Federated Learning with Shuffle Model [21.077469463027306]
Federated Learning, as a popular paradigm for collaborative training, is vulnerable to privacy attacks.
This work strengthens model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time.
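A minimal sketch of the shuffle model with personalized local budgets, assuming binary data and randomized response; the amplification analysis itself, which is the paper's contribution, is not reproduced here.

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """eps-LDP randomized response for one private bit."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

def shuffle_personalized_reports(bits, epsilons, rng=None):
    """Each client randomizes with its own local budget; a trusted
    shuffler then permutes the reports, severing the client-report
    link. Shuffling amplifies the effective central guarantee."""
    rng = np.random.default_rng() if rng is None else rng
    reports = [randomized_response(b, e, rng) for b, e in zip(bits, epsilons)]
    rng.shuffle(reports)  # the anonymizing shuffle
    return reports
```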
arXiv Detail & Related papers (2023-04-11T21:48:42Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups underserved in terms of model utility simultaneously experience weaker privacy guarantees.
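One ingredient behind such individual accounting can be sketched: an example whose gradient stays below the clipping threshold perturbs the noisy sum by less than the worst case, so it is charged a smaller privacy loss. The hypothetical helper below only computes per-example sensitivities; the full per-example accountant is beyond this sketch.

```python
import numpy as np

def per_example_sensitivity(grad_norms, clip_norm):
    """Per-example contribution to a DP-SGD update (illustrative).

    Gradients are clipped to clip_norm, so an example's effective
    sensitivity is min(||g_i||, clip_norm). Converting this into a
    per-example (eps, delta) requires an RDP-style accountant.
    """
    return np.minimum(np.asarray(grad_norms), clip_norm)
```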
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms [9.599356978682108]
We consider a platform's problem of collecting data from privacy sensitive users to estimate an underlying parameter of interest.
We consider two popular differential privacy settings for providing privacy guarantees for the users: central and local.
We pose the mechanism design problem as the optimal selection of an estimator and payments that will elicit truthful reporting of users' privacy sensitivities.
arXiv Detail & Related papers (2022-01-10T00:27:43Z) - Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem under differential privacy (DP).
We show that a privacy/utility trade-off between joint differential privacy (JDP) and local differential privacy (LDP) can be achieved by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
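User-level protection can be illustrated with a single mean query: reduce each user to one clipped statistic so that the user-level sensitivity is explicit. A minimal sketch under assumed bounded data, not the paper's adaptive-query algorithm:

```python
import numpy as np

def user_level_private_mean(user_samples, epsilon, lo=0.0, hi=1.0, rng=None):
    """User-level DP mean: protect a user's entire contribution.

    Each user is first reduced to a single clipped average, so replacing
    one user's whole dataset moves the final statistic by at most
    (hi - lo) / n_users, the user-level sensitivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    per_user = [np.clip(np.mean(s), lo, hi) for s in user_samples]
    sensitivity = (hi - lo) / len(per_user)
    return float(np.mean(per_user) + rng.laplace(scale=sensitivity / epsilon))

est = user_level_private_mean([[0.1, 0.2], [0.9], [0.4, 0.5, 0.6]], epsilon=1.0)
```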
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
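A common ingredient of such private optimistic algorithms is an exploration bonus computed from privatized visit counts. The sketch below is a hypothetical single-count illustration; a real JDP analysis would release counts across episodes, e.g. via a binary-tree mechanism, which is omitted here.

```python
import numpy as np

def noisy_count_bonus(visit_count, epsilon, confidence=1.0, rng=None):
    """Optimism from a privatized visit count (illustrative only).

    The true count is released with Laplace noise, then clamped to at
    least 1; the UCB-style exploration bonus shrinks as the noisy
    count grows.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = max(1.0, visit_count + rng.laplace(scale=1.0 / epsilon))
    return confidence / np.sqrt(noisy)
```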
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - Differential Privacy at Risk: Bridging Randomness and Privacy Budget [5.393465689287103]
We analyse the roles of the sources of randomness, namely the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution.
We propose privacy at risk, a probabilistic calibration of privacy-preserving mechanisms.
We show that composition using the cost-optimal privacy at risk provides a stronger privacy guarantee than classical advanced composition.
arXiv Detail & Related papers (2020-03-02T15:44:14Z)