Rethinking Disclosure Prevention with Pointwise Maximal Leakage
- URL: http://arxiv.org/abs/2303.07782v2
- Date: Fri, 25 Oct 2024 09:57:47 GMT
- Title: Rethinking Disclosure Prevention with Pointwise Maximal Leakage
- Authors: Sara Saeidian, Giulia Cervia, Tobias J. Oechtering, Mikael Skoglund
- Abstract summary: We propose a general model of utility and privacy in which utility is achieved by disclosing the value of low-entropy features of a secret $X$.
We prove that, contrary to popular opinion, it is possible to provide meaningful inferential privacy guarantees.
We show that PML-based privacy is compatible with and provides insights into existing notions such as differential privacy.
- Score: 36.3895452861944
- Abstract: This paper introduces a paradigm shift in the way privacy is defined, driven by a novel interpretation of the fundamental result of Dwork and Naor about the impossibility of absolute disclosure prevention. We propose a general model of utility and privacy in which utility is achieved by disclosing the value of low-entropy features of a secret $X$, while privacy is maintained by hiding the value of high-entropy features of $X$. Adopting this model, we prove that, contrary to popular opinion, it is possible to provide meaningful inferential privacy guarantees. These guarantees are given in terms of an operationally meaningful information measure called pointwise maximal leakage (PML) and prevent privacy breaches against a large class of adversaries regardless of their prior beliefs about $X$. We show that PML-based privacy is compatible with, and provides insights into, existing notions such as differential privacy. We also argue that our new framework enables highly flexible mechanism designs, where the randomness of a mechanism can be adjusted to the entropy of the data, ultimately leading to higher utility.
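As a quick reference, here is a minimal sketch of PML for finite alphabets, assuming the definition $\ell(X \to y) = \log \max_x P_{Y|X}(y|x)/P_Y(y)$ over the support of $P_X$; the prior and channel below are illustrative, not taken from the paper.

```python
import numpy as np

def pml(p_x, p_y_given_x, y):
    """PML (in nats) of observing outcome y of the channel P_{Y|X},
    assuming the finite-alphabet form log max_x P(y|x) / P(y)."""
    support = p_x > 0
    p_y = p_x @ p_y_given_x          # marginal P_Y(y') = sum_x P_X(x) P(y'|x)
    return np.log(p_y_given_x[support, y].max() / p_y[y])

# Illustrative example: a binary secret pushed through a noisy channel.
p_x = np.array([0.9, 0.1])                     # low-entropy prior on X
p_y_given_x = np.array([[0.75, 0.25],
                        [0.25, 0.75]])         # rows: x, columns: y
for y in (0, 1):
    print(f"PML(X -> y={y}) = {pml(p_x, p_y_given_x, y):.4f} nats")
```

Note how the leakage is outcome-dependent: the unlikely outcome $y=1$ leaks more about $X$ than the likely outcome $y=0$.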
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
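For context, here is a minimal sketch of the plain $\varepsilon$-LDP baseline (binary randomized response), not the BCDP mechanism itself; the inputs and $\varepsilon$ are illustrative.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    which satisfies epsilon-LDP for binary inputs."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

# Each user randomizes locally, so no external party ever sees the raw bit.
reports = [randomized_response(b, epsilon=1.0) for b in [0, 1, 1, 0, 1]]
print(reports)
```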
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach for capturing and ensuring the reliability of privacy protections.
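A minimal sketch of the $f$-DP object such analyses work with: the Gaussian tradeoff function $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$, which gives the smallest type-II error an adversary can achieve at type-I error $\alpha$; the values of $\mu$ below are illustrative.

```python
from scipy.stats import norm

def gaussian_tradeoff(alpha: float, mu: float) -> float:
    """Tradeoff function of mu-Gaussian DP: minimal type-II error at level alpha."""
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

# Smaller mu means the curve stays closer to 1 - alpha (stronger privacy).
for mu in (0.5, 1.0, 2.0):
    print(f"mu={mu}: beta(0.05) = {gaussian_tradeoff(0.05, mu):.4f}")
```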
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Information Density Bounds for Privacy [36.919956584905236]
This paper explores the implications of guaranteeing privacy by imposing a lower bound on the information density between the private and the public data.
We introduce an operationally meaningful privacy measure called pointwise maximal cost (PMC) and demonstrate that imposing an upper bound on PMC is equivalent to enforcing a lower bound on the information density.
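A minimal sketch of the quantity involved, assuming the standard information density $i(x; y) = \log P_{Y|X}(y|x)/P_Y(y)$ for finite alphabets; the distributions are illustrative.

```python
import numpy as np

def information_density(p_x, p_y_given_x):
    """Matrix of i(x; y) in nats for a finite joint distribution."""
    p_y = p_x @ p_y_given_x
    return np.log(p_y_given_x / p_y)   # broadcasts P_Y across rows x

p_x = np.array([0.5, 0.5])
p_y_given_x = np.array([[0.8, 0.2],
                        [0.3, 0.7]])
i_xy = information_density(p_x, p_y_given_x)
print("min information density:", i_xy.min())  # the quantity bounded from below
```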
arXiv Detail & Related papers (2024-07-01T10:38:02Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
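A minimal sketch of one simple way to obtain a Gaussian mechanism with bounded support, via rejection sampling into an interval; the paper's exact construction and accounting may differ, and the parameters below are illustrative.

```python
import random

def bounded_gaussian_mechanism(value, sigma, lo, hi):
    """Release value + N(0, sigma^2), resampled until the output lands in [lo, hi]."""
    while True:
        out = value + random.gauss(0.0, sigma)
        if lo <= out <= hi:
            return out

# Bounding the support rules out extreme outputs that dominate the privacy loss.
print(bounded_gaussian_mechanism(0.3, sigma=1.0, lo=0.0, hi=1.0))
```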
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- PAC Privacy Preserving Diffusion Models [6.299952353968428]
Diffusion models can produce images with both high privacy and visual quality.
However, challenges arise in ensuring robust protection when privatizing specific data attributes.
We introduce the PAC Privacy Preserving Diffusion Model, a model that leverages diffusion principles and ensures Probably Approximately Correct (PAC) privacy.
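A loose sketch of the PAC-privacy idea of calibrating noise to the empirical variability of a mechanism's output, here isotropically via resampled datasets; the paper's model and diffusion pipeline are far more involved, and all names below are illustrative.

```python
import numpy as np

def pac_privatize(mechanism, datasets, c=1.0, rng=np.random.default_rng(0)):
    """Estimate the output spread over resampled datasets, then add matched noise.
    A heavily simplified stand-in for PAC-privacy noise calibration."""
    outs = np.stack([mechanism(d) for d in datasets])
    sigma = c * outs.std(axis=0)            # per-coordinate spread estimate
    release = mechanism(datasets[0])
    return release + rng.normal(0.0, sigma)

mean_query = lambda d: d.mean(axis=0)
datasets = [np.random.default_rng(i).normal(size=(100, 3)) for i in range(20)]
print(pac_privatize(mean_query, datasets))
```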
arXiv Detail & Related papers (2023-12-02T18:42:52Z)
- Chained-DP: Can We Recycle Privacy Budget? [18.19895364709435]
We propose a novel Chained-DP framework enabling users to carry out data aggregation sequentially to recycle the privacy budget.
We show the mathematical nature of the sequential game, solve its Nash Equilibrium, and design an incentive mechanism with provable economic properties.
Our numerical simulation validates the effectiveness of Chained-DP, showing that it can significantly save privacy budget and lower estimation error compared to the traditional LDP mechanism.
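A loose sketch of the budget-recycling intuition, assuming a chain in which fresh noise is added only when the accumulated noise no longer covers a user's requirement; this illustrates the chaining idea, not the paper's game-theoretic mechanism.

```python
import random

def chained_aggregate(values, noise_var=1.0, required_var=1.0):
    """Sequentially fold values into a running sum, topping up noise lazily."""
    total, acc_var = 0.0, 0.0
    for v in values:
        if acc_var < required_var:              # add fresh noise only when needed
            total += random.gauss(0.0, noise_var ** 0.5)
            acc_var += noise_var                # independent noise variances add
        total += v                              # value is masked by existing noise
    return total

print(chained_aggregate([1.0, 0.0, 1.0, 1.0]))
```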
arXiv Detail & Related papers (2023-09-12T08:07:59Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
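A minimal sketch of the EVR control flow with hypothetical stand-ins for the estimator and verifier; the paper instantiates the verifier with a randomized privacy test.

```python
import random

def evr_release(mechanism, data, estimate_epsilon, verify):
    """Estimate-verify-release: only release if the privacy estimate checks out."""
    eps_hat = estimate_epsilon(mechanism)       # 1. estimate privacy parameter
    if not verify(mechanism, eps_hat):          # 2. verify the estimate holds
        raise RuntimeError("privacy estimate failed verification; abort")
    return mechanism(data), eps_hat             # 3. release output with guarantee

# Toy stand-ins: a Laplace-noised count (Exp - Exp = Laplace) and a trivial verifier.
mech = lambda d: sum(d) + random.expovariate(1.0) - random.expovariate(1.0)
out, eps = evr_release(mech, [1, 0, 1], lambda m: 1.0, lambda m, e: True)
print(out, eps)
```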
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
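A minimal sketch of the coarser, single-number view that such $f$-DP curves refine: reading off the pure $\varepsilon$-LDP level of a discrete-valued mechanism from its transition matrix; the matrix below is illustrative.

```python
import numpy as np

def ldp_epsilon(Q):
    """epsilon = max over outputs y and input pairs (x, x') of log Q[x,y]/Q[x',y]."""
    ratios = Q[:, None, :] / Q[None, :, :]      # shape (x, x', y)
    return float(np.log(ratios.max()))

Q = np.array([[0.6, 0.3, 0.1],                 # rows: inputs x, columns: outputs y
              [0.2, 0.5, 0.3]])
print(f"epsilon = {ldp_epsilon(Q):.4f}")       # log(0.6/0.2) = log 3
```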
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Privacy-Preserving Distributed Expectation Maximization for Gaussian Mixture Model using Subspace Perturbation [4.2698418800007865]
Federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than raw private data.
We propose a fully decentralized privacy-preserving solution, which is able to securely compute the updates in each step.
Numerical validation shows that the proposed approach outperforms the existing approach in terms of both accuracy and privacy level.
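For the flavor of the setting, a generic sketch of exact decentralized summation under additive pairwise masks; note this is not the paper's subspace-perturbation construction, which instead inserts noise in a subspace that does not affect convergence.

```python
import random

def masked_sum(values, scale=10.0):
    """Each party's share is perturbed by pairwise zero-sum masks,
    yet the aggregate of all shares is exact."""
    n = len(values)
    masks = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = random.uniform(-scale, scale)
            masks[i][j], masks[j][i] = r, -r    # pairwise masks cancel in the sum
    shares = [values[i] + sum(masks[i]) for i in range(n)]
    return sum(shares)                          # masks cancel: exact sum

print(masked_sum([0.2, 0.4, 0.9]))              # ~1.5 up to float error
```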
arXiv Detail & Related papers (2022-09-16T09:58:03Z)
- Differential Privacy at Risk: Bridging Randomness and Privacy Budget [5.393465689287103]
We analyse the roles of two sources of randomness: the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution.
We propose privacy at risk, a probabilistic calibration of privacy-preserving mechanisms.
We show that composition using the cost-optimal privacy at risk provides a stronger privacy guarantee than the classical advanced composition.
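A minimal sketch of the underlying quantity, assuming "privacy at risk" means the probability, over the mechanism's explicit randomness, that the realized privacy loss stays below a target $\varepsilon$; shown for the Laplace mechanism with illustrative parameters.

```python
import random

def laplace(b):
    """Laplace(0, b) as the difference of two independent exponentials."""
    return random.expovariate(1 / b) - random.expovariate(1 / b)

def privacy_at_risk(eps_target, delta=1.0, b=1.0, trials=100_000):
    """Monte Carlo estimate of P(|privacy loss| <= eps_target) for Laplace noise."""
    hits = 0
    for _ in range(trials):
        z = laplace(b)                            # noise around f(D) = 0
        loss = (abs(z - delta) - abs(z)) / b      # realized loss vs. f(D') = delta
        if abs(loss) <= eps_target:
            hits += 1
    return hits / trials

# Worst-case epsilon is delta/b = 1; a smaller epsilon holds only with some probability.
print(privacy_at_risk(eps_target=0.5))
```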
arXiv Detail & Related papers (2020-03-02T15:44:14Z)