Budget Recycling Differential Privacy
- URL: http://arxiv.org/abs/2403.11445v4
- Date: Fri, 12 Jul 2024 16:46:17 GMT
- Title: Budget Recycling Differential Privacy
- Authors: Bo Jiang, Jian Du, Sagar Sharma, Qiang Yan
- Abstract summary: The Budget Recycling Differential Privacy (BR-DP) framework is designed to provide soft-bounded noisy outputs for a broad range of DP mechanisms.
BR-DP consists of two components: a DP kernel responsible for generating a noisy answer per iteration, and a recycler that probabilistically recycles/regenerates or releases the noisy answer.
- Score: 5.603278701712346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential Privacy (DP) mechanisms usually force a reduction in data utility by producing "out-of-bound" noisy results for a tight privacy budget. We introduce the Budget Recycling Differential Privacy (BR-DP) framework, designed to provide soft-bounded noisy outputs for a broad range of existing DP mechanisms. By "soft-bounded," we refer to the mechanism's ability to release most outputs within a predefined error boundary, thereby improving utility while maintaining privacy. The core of BR-DP consists of two components: a DP kernel responsible for generating a noisy answer per iteration, and a recycler that probabilistically recycles/regenerates or releases the noisy answer. We delve into the privacy accounting of BR-DP, culminating in a budgeting principle that optimally sub-allocates the available budget between the DP kernel and the recycler. Furthermore, we introduce algorithms for tight BR-DP accounting under composition, and our findings indicate that BR-DP achieves reduced privacy leakage post-composition compared to DP. Additionally, we explore privacy amplification via subsampling within the BR-DP framework and propose optimal sampling rates for BR-DP across various queries. Experiments with real data demonstrate BR-DP's effectiveness in lifting the utility-privacy tradeoff provided by DP mechanisms.
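A minimal sketch of the kernel/recycler loop described in the abstract (illustrative only; the function and parameter names are hypothetical, the kernel here is a plain Laplace mechanism on a sensitivity-1 query, and the fixed release probability stands in for the paper's optimal budget sub-allocation):

```python
import numpy as np

def br_dp_release(true_answer, eps_kernel, error_bound, p_release,
                  max_iters=100, rng=None):
    """Sketch of a BR-DP-style release loop. A DP kernel draws a noisy
    answer each iteration; the recycler releases in-bound answers and
    probabilistically releases or regenerates out-of-bound ones. The
    accept/reject decision itself leaks information, which is why the
    total budget must be split between kernel and recycler."""
    rng = rng or np.random.default_rng()

    def draw():
        return true_answer + rng.laplace(scale=1.0 / eps_kernel)

    noisy = draw()
    for _ in range(max_iters):
        if abs(noisy - true_answer) <= error_bound:
            return noisy        # "soft-bounded": in-bound, release
        if rng.random() < p_release:
            return noisy        # recycler releases anyway
        noisy = draw()          # recycle the budget: regenerate
    return noisy
```

In the paper, the kernel budget and the release rule are jointly optimized by the budgeting principle; the loop above only shows the interplay of the two components.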
Related papers
- Meeting Utility Constraints in Differential Privacy: A Privacy-Boosting Approach [7.970280110429423]
We propose a privacy-boosting framework that is compatible with most noise-adding DP mechanisms.
Our framework enhances the likelihood of outputs falling within a preferred subset of the support to meet utility requirements.
We show that our framework achieves lower privacy loss than standard DP mechanisms under utility constraints.
arXiv Detail & Related papers (2024-12-13T23:34:30Z)
- Correlated Privacy Mechanisms for Differentially Private Distributed Mean Estimation [8.660393575612169]
Local differential privacy (LDP) and distributed DP with secure aggregation (SA) are the most common notions of DP used in DP-DME settings with an untrusted server.
LDP provides strong resilience to dropouts, colluding users, and adversarial attacks, but suffers from poor utility.
In this work, we present a generalized framework for DP-DME, that captures LDP and SA-based mechanisms as extreme cases.
arXiv Detail & Related papers (2024-07-03T17:22:33Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
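For context, one simple way to bound the Gaussian mechanism's support is a rectified (clamped) Gaussian; a sketch under the assumption of a scalar query, not the paper's full construction or accounting:

```python
import numpy as np

def rectified_gaussian(value, sigma, lo, hi, rng=None):
    """Gaussian mechanism with bounded support via rectification
    (illustrative sketch): add Gaussian noise, then clamp to [lo, hi],
    so probability mass outside the interval collects at the endpoints."""
    rng = rng or np.random.default_rng()
    return float(np.clip(value + rng.normal(scale=sigma), lo, hi))
```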
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
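As a reference point, the classic mechanism-agnostic amplification bound for Poisson subsampling, which mechanism-specific guarantees like this paper's tighten, is easy to state:

```python
import math

def amplified_epsilon(eps, q):
    """Standard amplification-by-subsampling bound: running an eps-DP
    mechanism on a Poisson subsample with rate q satisfies
    log(1 + q * (e^eps - 1))-DP. Shown for context only; the paper
    derives tighter, mechanism-specific guarantees."""
    return math.log1p(q * (math.exp(eps) - 1.0))

# Example: eps = 1.0 at sampling rate q = 0.1 amplifies to ~0.159.
```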
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- Chained-DP: Can We Recycle Privacy Budget? [18.19895364709435]
We propose a novel Chained-DP framework enabling users to carry out data aggregation sequentially to recycle the privacy budget.
We show the mathematical nature of the sequential game, solve its Nash Equilibrium, and design an incentive mechanism with provable economic properties.
Our numerical simulation validates the effectiveness of Chained-DP, showing that it can significantly save privacy budget and lower estimation error compared to the traditional LDP mechanism.
arXiv Detail & Related papers (2023-09-12T08:07:59Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
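For reference, the trade-off function of a pure $\epsilon$-DP mechanism in the $f$-DP framework (a standard fact from Dong, Roth, and Su's $f$-DP; the paper derives tighter curves for specific discrete-valued mechanisms):

```python
import math

def tradeoff_pure_dp(alpha, eps):
    """Trade-off function f(alpha) of an eps-DP mechanism: the smallest
    achievable type-II error when distinguishing neighboring datasets at
    type-I error alpha. Standard bound, not this paper's tighter ones."""
    return max(0.0, 1.0 - math.exp(eps) * alpha,
               math.exp(-eps) * (1.0 - alpha))
```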
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Differentially Private Regret Minimization in Episodic Markov Decision Processes [6.396288020763144]
We study regret in finite-horizon tabular Markov decision processes (MDPs) under the constraints of differential privacy (DP).
This is motivated by the widespread applications of reinforcement learning (RL) in real-world sequential decision making problems.
arXiv Detail & Related papers (2021-12-20T15:12:23Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Three Variants of Differential Privacy: Lossless Conversion and Applications [13.057076084452016]
We consider three variants of differential privacy (DP), namely approximate DP, Rényi DP (RDP), and hypothesis-test DP.
In the first part, we develop a machinery for relating approximate DP to RDP based on the joint range of two $f$-divergences.
As an application, we apply our result to the moments accountant framework for characterizing privacy guarantees of noisy gradient descent.
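For comparison, the standard (lossy) RDP-to-approximate-DP conversion that such joint-range machinery improves on:

```python
import math

def rdp_to_approx_dp(rdp_eps, alpha, delta):
    """Classic conversion (Mironov, 2017): if a mechanism is
    (alpha, rdp_eps)-RDP, it is (rdp_eps + log(1/delta)/(alpha-1), delta)-DP.
    The paper above obtains tighter, lossless conversions via the joint
    range of the two underlying f-divergences."""
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)
```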
arXiv Detail & Related papers (2020-08-14T18:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.