Differential Privacy at Risk: Bridging Randomness and Privacy Budget
- URL: http://arxiv.org/abs/2003.00973v2
- Date: Mon, 13 Apr 2020 23:14:27 GMT
- Title: Differential Privacy at Risk: Bridging Randomness and Privacy Budget
- Authors: Ashish Dandekar, Debabrota Basu, Stephane Bressan
- Abstract summary: We analyse the roles of the sources of randomness, namely the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution.
We propose privacy at risk, a probabilistic calibration of privacy-preserving mechanisms.
We show that composition using the cost-optimal privacy at risk provides a stronger privacy guarantee than classical advanced composition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The calibration of noise for a privacy-preserving mechanism depends on the
sensitivity of the query and the prescribed privacy level. A data steward must
make the non-trivial choice of a privacy level that balances the requirements
of users and the monetary constraints of the business entity. We analyse the roles
of the sources of randomness, namely the explicit randomness induced by the
noise distribution and the implicit randomness induced by the data-generation
distribution, that are involved in the design of a privacy-preserving
mechanism. The finer analysis enables us to provide stronger privacy guarantees
with quantifiable risks. Thus, we propose privacy at risk, a probabilistic
calibration of privacy-preserving mechanisms. We provide a
composition theorem that leverages privacy at risk. We instantiate the
probabilistic calibration for the Laplace mechanism by providing analytical
results. We also propose a cost model that bridges the gap between the privacy
level and the compensation budget estimated by a GDPR compliant business
entity. The convexity of the proposed cost model leads to a unique fine-tuning
of privacy level that minimises the compensation budget. We show its
effectiveness by illustrating a realistic scenario that avoids overestimation
of the compensation budget by using privacy at risk for the Laplace mechanism.
We quantitatively show that composition using the cost-optimal privacy at risk
provides a stronger privacy guarantee than classical advanced composition.
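The calibration the abstract refers to is that of the standard Laplace mechanism, whose noise scale is set from the query sensitivity and the privacy level. A minimal sketch of that textbook mechanism (which the paper's probabilistic "privacy at risk" analysis refines) follows; the function name and counting-query example are illustrative, not from the paper:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value under epsilon-differential privacy by adding
    Laplace noise with the standard calibration b = sensitivity / epsilon.
    This is the classical mechanism; the paper's privacy-at-risk analysis
    gives probabilistic guarantees on top of this calibration."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1; a smaller epsilon
# (stronger privacy) means a larger noise scale.
noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```

Note the inverse relationship: halving epsilon doubles the noise scale, which is exactly the privacy/utility trade-off the data steward must price.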
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Private Optimal Inventory Policy Learning for Feature-based Newsvendor with Unknown Demand
This paper introduces a novel approach to estimate a privacy-preserving optimal inventory policy within the f-differential privacy framework.
We develop a clipped noisy gradient descent algorithm based on convolution smoothing for optimal inventory estimation.
Our numerical experiments demonstrate that the proposed new method can achieve desirable privacy protection with a marginal increase in cost.
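The "clipped noisy gradient descent" idea mentioned above can be sketched as a generic DP-SGD-style update: clip each per-example gradient, average, add noise, and descend. This is an illustrative Gaussian-noise sketch only; the paper's actual algorithm additionally uses convolution smoothing and $f$-DP accounting, which are omitted here, and all names are assumptions:

```python
import numpy as np

def clipped_noisy_gradient_step(params, per_example_grads, lr, clip_norm,
                                noise_std, rng=None):
    """One generic clipped-noisy-gradient step: clip each per-example
    gradient to L2 norm clip_norm, average, perturb with Gaussian noise,
    and take a gradient-descent step."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the clip threshold.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noisy_grad = mean_grad + rng.normal(0.0, noise_std, size=mean_grad.shape)
    return params - lr * noisy_grad
```

Clipping bounds each example's influence (the sensitivity), which is what lets the added noise be calibrated to a privacy guarantee.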
arXiv Detail & Related papers (2024-04-23T19:15:43Z)
- Chained-DP: Can We Recycle Privacy Budget?
We propose a novel Chained-DP framework enabling users to carry out data aggregation sequentially to recycle the privacy budget.
We show the mathematical nature of the sequential game, solve its Nash Equilibrium, and design an incentive mechanism with provable economic properties.
Our numerical simulation validates the effectiveness of Chained-DP, showing that it can significantly save privacy budget and lower estimation error compared to the traditional LDP mechanism.
arXiv Detail & Related papers (2023-09-12T08:07:59Z)
- Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy
We provide a framework for setting $\varepsilon$ based on its relationship with Bayesian posterior probabilities of disclosure.
The agency responsible for the data release decides how much posterior risk it is willing to accept at various levels of prior risk.
arXiv Detail & Related papers (2023-06-19T22:23:09Z)
- A Randomized Approach for Tight Privacy Accounting
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
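The estimate-verify-release control flow described above can be sketched as a simple skeleton. All three callables are illustrative placeholders invented here, not the paper's API; in particular, the actual verifier is itself a randomized, privacy-preserving procedure, which this sketch does not capture:

```python
def evr_release(run_mechanism, estimate_privacy, verifier):
    """Skeleton of the estimate-verify-release (EVR) flow: estimate the
    privacy parameter, verify the mechanism meets it, and only then
    release the mechanism's output."""
    eps_estimate = estimate_privacy()
    if not verifier(eps_estimate):
        # Failed verification means the estimated guarantee cannot be
        # certified, so nothing is released.
        raise RuntimeError("privacy estimate failed verification; abort release")
    return run_mechanism()
```

The point of the ordering is that the (possibly optimistic) estimate is never trusted on its own: release is gated on the verification step.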
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Decentralized Matrix Factorization with Heterogeneous Differential Privacy
We propose a novel Heterogeneous Differentially Private Matrix Factorization algorithm (denoted as HDPMF) for untrusted recommenders.
Our framework uses a modified stretching mechanism with an innovative rescaling scheme to achieve a better trade-off between privacy and accuracy.
arXiv Detail & Related papers (2022-12-01T06:48:18Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
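The classical AboveThreshold (sparse vector) algorithm that ReducedAboveThreshold generalizes can be sketched as follows. This is the textbook baseline, not the paper's generalization, and the noise scales shown are the common textbook choices for sensitivity-1 queries:

```python
import numpy as np

def above_threshold(queries, database, threshold, epsilon, rng=None):
    """Classical AboveThreshold: scan a stream of sensitivity-1 queries and
    report only the index of the first query whose noisy answer exceeds a
    noisy threshold, under epsilon-differential privacy."""
    if rng is None:
        rng = np.random.default_rng()
    # Perturb the threshold once, then each query answer independently.
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, q in enumerate(queries):
        if q(database) + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i
    return None  # no query exceeded the noisy threshold
```

Its appeal is that the privacy cost depends only on the one reported query, not on the (possibly long) stream of below-threshold queries.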
arXiv Detail & Related papers (2022-06-15T01:43:37Z)
- Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms
We consider a platform's problem of collecting data from privacy sensitive users to estimate an underlying parameter of interest.
We consider two popular differential privacy settings for providing privacy guarantees for the users: central and local.
We pose the mechanism design problem as the optimal selection of an estimator and payments that will elicit truthful reporting of users' privacy sensitivities.
arXiv Detail & Related papers (2022-01-10T00:27:43Z)
- Private Reinforcement Learning with PAC and Regret Guarantees
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.