Chained-DP: Can We Recycle Privacy Budget?
- URL: http://arxiv.org/abs/2309.07075v3
- Date: Tue, 19 Sep 2023 01:54:24 GMT
- Title: Chained-DP: Can We Recycle Privacy Budget?
- Authors: Jingyi Li, Guangjing Huang, Liekang Zeng, Lin Chen, Xu Chen
- Abstract summary: We propose a novel Chained-DP framework enabling users to carry out data aggregation sequentially to recycle the privacy budget.
We show the mathematical nature of the sequential game, solve its Nash Equilibrium, and design an incentive mechanism with provable economic properties.
Our numerical simulation validates the effectiveness of Chained-DP, showing that it can significantly save privacy budget and lower estimation error compared to the traditional LDP mechanism.
- Score: 18.19895364709435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy-preserving vector mean estimation is a crucial primitive in federated analytics. Existing practices usually resort to Local Differential Privacy (LDP) mechanisms that inject random noise into users' vectors before they are communicated to the central server. Due to the privacy-utility trade-off, the privacy budget has been widely recognized as the bottleneck resource that requires careful provisioning. In this paper, we explore the possibility of privacy budget recycling and propose a novel Chained-DP framework enabling users to carry out data aggregation sequentially to recycle the privacy budget. We establish a sequential game to model the user interactions in our framework. We theoretically show the mathematical nature of the sequential game, solve its Nash Equilibrium, and design an incentive mechanism with provable economic properties. We further derive a protocol with differential privacy guarantees to mitigate potential privacy collusion attacks and avoid holistic exposure. Our numerical simulation validates the effectiveness of Chained-DP, showing that it can significantly save privacy budget and lower estimation error compared to the traditional LDP mechanism.
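For context, here is a minimal sketch of the per-user LDP baseline that Chained-DP is compared against: each user clips its vector and adds Laplace noise scaled to the resulting l1 sensitivity, and the server averages the noisy reports. The clipping bound, dimension, and user count are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def ldp_perturb(v, epsilon, clip=1.0, rng=np.random.default_rng()):
    """Classic epsilon-LDP Laplace mechanism for one user's vector.

    After clipping every coordinate to [-clip, clip], any two possible
    inputs differ by at most 2 * d * clip in l1 norm, so Laplace noise
    with scale (2 * d * clip) / epsilon yields epsilon-LDP.
    """
    v = np.clip(v, -clip, clip)
    d = v.shape[0]
    return v + rng.laplace(0.0, 2.0 * d * clip / epsilon, size=d)

# Server-side mean estimation: average the independently noised reports.
rng = np.random.default_rng(0)
users = rng.uniform(-1.0, 1.0, size=(1000, 8))  # 1000 users, 8-dim vectors
reports = np.stack([ldp_perturb(u, epsilon=1.0, rng=rng) for u in users])
print("true mean:   ", users.mean(axis=0))
print("LDP estimate:", reports.mean(axis=0))
```

The estimation error of this baseline grows with the dimension and shrinks with the privacy budget epsilon, which is why the budget is the bottleneck resource the paper tries to recycle.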
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
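A minimal sketch of one way to bound the support of the Gaussian mechanism, truncation by rejection sampling; this is a generic illustration under assumed bounds [lo, hi], not necessarily the authors' exact constructions or their data-dependent accounting.

```python
import numpy as np

def truncated_gaussian_mechanism(x, sigma, lo, hi, rng=np.random.default_rng()):
    """Gaussian mechanism restricted to the interval [lo, hi].

    Rejection sampling from N(x, sigma^2) conditioned on [lo, hi] is one
    generic way to bound the support; rectification (clamping samples to
    the interval) is another.
    """
    assert lo <= x <= hi, "the true value should lie inside the support"
    while True:
        y = rng.normal(x, sigma)
        if lo <= y <= hi:
            return y

print(truncated_gaussian_mechanism(0.3, sigma=0.5, lo=0.0, hi=1.0))
```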
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
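A control-flow sketch of the EVR pipeline as described above; `mechanism` and `verifier` are hypothetical stand-ins, and the paper's actual verifier is not reproduced here.

```python
def estimate_verify_release(mechanism, verifier, data, eps_estimate):
    """Schematic estimate-verify-release (EVR) flow:
    1. an estimated privacy parameter eps_estimate is supplied,
    2. the verifier checks that the mechanism meets this estimate,
    3. the query output is released only if verification succeeds.
    """
    output = mechanism(data)               # run the randomized mechanism
    if verifier(mechanism, eps_estimate):  # hypothetical privacy verifier
        return output
    raise RuntimeError("estimated privacy guarantee not verified; output withheld")
```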
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Rethinking Disclosure Prevention with Pointwise Maximal Leakage [36.3895452861944]
We propose a general model of utility and privacy in which utility is achieved by disclosing the value of low-entropy features of a secret $X$.
We prove that, contrary to popular opinion, it is possible to provide meaningful inferential privacy guarantees.
We show that PML-based privacy is compatible with and provides insights into existing notions such as differential privacy.
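A small numerical sketch of pointwise maximal leakage (PML) for a finite secret $X$ and a disclosed value $Y$, using the standard definition $\ell(X \to y) = \log \max_x P(y|x) / P(y)$; the prior and channel below are toy assumptions.

```python
import numpy as np

def pointwise_maximal_leakage(p_x, p_y_given_x):
    """PML of each outcome y: log( max_x P(y|x) / P(y) ).

    p_x:         shape (|X|,), prior over the secret X
    p_y_given_x: shape (|X|, |Y|), channel from X to the disclosed Y
    """
    p_y = p_x @ p_y_given_x                       # marginal P(y)
    return np.log(p_y_given_x.max(axis=0) / p_y)

# Toy example: Y is a noisy low-entropy feature of X.
p_x = np.array([0.5, 0.3, 0.2])
p_y_given_x = np.array([[0.9, 0.1],
                        [0.8, 0.2],
                        [0.1, 0.9]])
print(pointwise_maximal_leakage(p_x, p_y_given_x))
```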
arXiv Detail & Related papers (2023-03-14T10:47:40Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
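As a concrete instance, the sketch below pairs a classic discrete-valued mechanism, k-ary randomized response, with the trade-off function $f_\varepsilon(\alpha) = \max\{0,\ 1 - e^\varepsilon \alpha,\ e^{-\varepsilon}(1 - \alpha)\}$ that characterizes pure $\varepsilon$-DP in the $f$-DP framework; the tight $f$-DP curves the paper derives for other mechanisms are not reproduced here.

```python
import numpy as np

def randomized_response(value, k, epsilon, rng=np.random.default_rng()):
    """k-ary randomized response: keep the true value with probability
    e^eps / (e^eps + k - 1), otherwise report a uniform other value."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return value
    other = rng.integers(0, k - 1)
    return other if other < value else other + 1  # skip the true value

def tradeoff_pure_dp(alpha, epsilon):
    """Trade-off function of an eps-DP mechanism in f-DP: the smallest
    type-II error achievable at type-I error alpha."""
    alpha = np.asarray(alpha, dtype=float)
    return np.maximum.reduce([np.zeros_like(alpha),
                              1.0 - np.exp(epsilon) * alpha,
                              np.exp(-epsilon) * (1.0 - alpha)])

print(randomized_response(2, k=5, epsilon=1.0))
print(tradeoff_pure_dp([0.0, 0.1, 0.5, 1.0], epsilon=1.0))
```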
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms [9.599356978682108]
We consider a platform's problem of collecting data from privacy sensitive users to estimate an underlying parameter of interest.
We consider two popular differential privacy settings for providing privacy guarantees for the users: central and local.
We pose the mechanism design problem as the optimal selection of an estimator and payments that will elicit truthful reporting of users' privacy sensitivities.
arXiv Detail & Related papers (2022-01-10T00:27:43Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint differential privacy (JDP) and local differential privacy (LDP) by leveraging the shuffle model of privacy while preserving local privacy.
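A schematic sketch of the shuffle-model step this result relies on: each user applies a local randomizer, then a trusted shuffler permutes the reports so the server cannot link a report to its sender; the amplification bounds themselves are not computed in this sketch.

```python
import numpy as np

def shuffle_model_round(local_values, local_randomizer, rng=np.random.default_rng()):
    """One round in the shuffle model: locally randomize, then permute.

    The uniformly random permutation removes the link between users and
    reports, which is the source of amplification over plain LDP.
    """
    reports = [local_randomizer(v, rng) for v in local_values]
    return [reports[i] for i in rng.permutation(len(reports))]

# Example local randomizer: binary randomized response at epsilon = 1.
def binary_rr(bit, rng, epsilon=1.0):
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

print(shuffle_model_round([0, 1, 1, 0, 1], binary_rr))
```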
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Generalized Linear Bandits with Local Differential Privacy [4.922800530841394]
Many applications such as personalized medicine and online advertising require the utilization of individual-specific information for effective learning.
This motivates the introduction of local differential privacy (LDP), a stringent notion in privacy, to contextual bandits.
In this paper, we design LDP algorithms for generalized linear bandits to achieve the same regret bound as in non-private settings.
arXiv Detail & Related papers (2021-06-07T06:42:00Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
In decentralized learning, the local exchange of estimates can allow the inference of private data.
Existing schemes add perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the network centroid.
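A minimal sketch of one simple way to realize such a nullspace condition: draw perturbations whose sum across agents is zero, so the network-wide average (the centroid) is left untouched. This is an illustrative instance, not necessarily the authors' graph-homomorphic construction.

```python
import numpy as np

def zero_sum_perturbations(n_agents, dim, sigma, rng=np.random.default_rng()):
    """Perturbations in the nullspace of the averaging operator: they sum
    to zero over agents, so the centroid sees no noise at all."""
    noise = rng.normal(0.0, sigma, size=(n_agents, dim))
    return noise - noise.mean(axis=0, keepdims=True)

pert = zero_sum_perturbations(n_agents=5, dim=3, sigma=1.0)
print(pert.sum(axis=0))  # ~0 up to floating-point error: invisible to the centroid
```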
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Differential Privacy at Risk: Bridging Randomness and Privacy Budget [5.393465689287103]
We analyse the roles of two sources of randomness: the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution.
We propose privacy at risk, a probabilistic calibration of privacy-preserving mechanisms.
We show that composition using the cost-optimal privacy at risk provides a stronger privacy guarantee than the classical advanced composition.
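For reference, a sketch of the classical advanced composition bound this entry compares against: composing $k$ mechanisms that are each $\varepsilon$-DP yields $(\varepsilon', \delta')$-DP with $\varepsilon' = \varepsilon\sqrt{2k\ln(1/\delta')} + k\varepsilon(e^\varepsilon - 1)$; the paper's cost-optimal privacy-at-risk composition is not reproduced here.

```python
import math

def advanced_composition_eps(eps, k, delta_prime):
    """eps' under classical advanced composition for the k-fold
    composition of eps-DP mechanisms, with slack delta_prime > 0."""
    return (eps * math.sqrt(2.0 * k * math.log(1.0 / delta_prime))
            + k * eps * (math.exp(eps) - 1.0))

# k = 100 queries at eps = 0.1 each, slack delta' = 1e-6:
print(advanced_composition_eps(0.1, 100, 1e-6))  # ~6.3, vs naive k*eps = 10
```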
arXiv Detail & Related papers (2020-03-02T15:44:14Z)