Interpreting Differential Privacy in Terms of Disclosure Risk
- URL: http://arxiv.org/abs/2507.09699v1
- Date: Sun, 13 Jul 2025 16:20:13 GMT
- Title: Interpreting Differential Privacy in Terms of Disclosure Risk
- Authors: Zeki Kazan, Sagar Sharma, Wanrong Zhang, Bo Jiang, Qiang Yan
- Abstract summary: We show novel relationships between differential privacy and measures of statistical disclosure risk. We suggest how experts and non-experts can use these results to explain the DP guarantee.
- Score: 8.236691013291452
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the use of differential privacy (DP) becomes widespread, the development of effective tools for reasoning about the privacy guarantee becomes increasingly critical. In pursuit of this goal, we demonstrate novel relationships between DP and measures of statistical disclosure risk. We suggest how experts and non-experts can use these results to explain the DP guarantee, interpret DP composition theorems, select and justify privacy parameters, and identify worst-case adversary prior probabilities.
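One way to make the abstract's "measures of statistical disclosure risk" concrete is the standard Bayesian reading of pure ε-DP: the mechanism's output can multiply an adversary's prior odds that a target record is present by at most e^ε. The sketch below is that textbook posterior bound, stated as an assumption about the kind of relationship the paper studies, not a reproduction of its specific results:

```python
import math

def worst_case_posterior(prior: float, epsilon: float) -> float:
    """Upper bound on an adversary's posterior belief that a target
    record is in the dataset, given prior belief `prior` and one
    release from a pure eps-DP mechanism. Follows from the
    likelihood-ratio bound exp(eps) implied by the DP definition."""
    odds = prior / (1.0 - prior)          # prior odds
    max_odds = math.exp(epsilon) * odds   # eps-DP caps the odds update
    return max_odds / (1.0 + max_odds)    # convert back to a probability

# A 50% prior under eps = 1 can rise to at most ~0.731.
print(round(worst_case_posterior(0.5, 1.0), 3))
```

Inverting this bound is one way an analyst could "select and justify privacy parameters": pick the largest ε for which the worst-case posterior stays below a tolerated disclosure-risk threshold.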
Related papers
- Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose an individualized iDP privacy contract that uses divergences to provide users with a hard upper bound on their excess vulnerability.
arXiv Detail & Related papers (2026-01-19T10:26:12Z) - How to Get Actual Privacy and Utility from Privacy Models: the k-Anonymity and Differential Privacy Families [3.9894389299295514]
Privacy models were introduced in privacy-preserving data publishing and statistical disclosure control. We find they may fail to provide adequate protection guarantees because of problems in their definition. We argue that a semantic reformulation of k-anonymity can offer more robust privacy without losing utility.
arXiv Detail & Related papers (2025-10-13T11:41:12Z) - On the MIA Vulnerability Gap Between Private GANs and Diffusion Models [51.53790101362898]
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. We present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models.
arXiv Detail & Related papers (2025-09-03T14:18:22Z) - DPolicy: Managing Privacy Risks Across Multiple Releases with Differential Privacy [44.27723721899118]
We present DPolicy, a system designed to manage cumulative privacy risks across multiple data releases using Differential Privacy (DP). Unlike traditional approaches that treat each release in isolation or rely on a single (global) DP guarantee, our system employs a flexible framework that considers multiple DP guarantees simultaneously. DPolicy introduces a high-level policy language to formalize privacy guarantees, making traditionally implicit assumptions about scopes and contexts explicit.
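The cross-release accounting problem DPolicy addresses can be illustrated with a much simpler device: a ledger that enforces basic sequential composition, under which the (ε, δ) parameters of successive releases add up. This is an assumption-laden toy, not DPolicy's policy language:

```python
class BudgetLedger:
    """Toy tracker for cumulative privacy loss across data releases,
    using basic sequential composition (eps and delta sum across
    releases). It illustrates why cross-release accounting is needed,
    not how DPolicy's richer scope/context policies work."""

    def __init__(self, eps_cap: float, delta_cap: float):
        self.eps_cap, self.delta_cap = eps_cap, delta_cap
        self.eps_spent, self.delta_spent = 0.0, 0.0

    def release(self, eps: float, delta: float = 0.0) -> bool:
        """Permit a release only if the cumulative budget allows it."""
        if (self.eps_spent + eps > self.eps_cap
                or self.delta_spent + delta > self.delta_cap):
            return False
        self.eps_spent += eps
        self.delta_spent += delta
        return True

ledger = BudgetLedger(eps_cap=1.0, delta_cap=1e-6)
print(ledger.release(0.4))  # True: 0.4 of 1.0 spent
print(ledger.release(0.4))  # True: 0.8 of 1.0 spent
print(ledger.release(0.4))  # False: would exceed eps_cap
```

Treating each release in isolation, as the abstract notes, amounts to skipping the cumulative check above entirely.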
arXiv Detail & Related papers (2025-05-10T19:49:51Z) - Evaluating Differential Privacy on Correlated Datasets Using Pointwise Maximal Leakage [38.4830633082184]
Data-driven advancements pose substantial risks to privacy. Differential privacy has become a cornerstone of privacy preservation efforts. Our work aims to foster a deeper understanding of subtle privacy risks and highlight the need for more effective privacy-preserving mechanisms.
arXiv Detail & Related papers (2025-02-08T10:30:45Z) - Meeting Utility Constraints in Differential Privacy: A Privacy-Boosting Approach [7.970280110429423]
We propose a privacy-boosting framework that is compatible with most noise-adding DP mechanisms. Our framework enhances the likelihood of outputs falling within a preferred subset of the support to meet utility requirements. We show that our framework achieves lower privacy loss than standard DP mechanisms under utility constraints.
arXiv Detail & Related papers (2024-12-13T23:34:30Z) - Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
differential privacy (DP) is a classical approach to capture and ensure the reliability of private protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - Differential Confounding Privacy and Inverse Composition [32.85314813605347]
We introduce differential confounding privacy (DCP), a specialized form of the Pufferfish privacy framework. We show that while DCP mechanisms retain privacy guarantees under composition, they lack the graceful compositional properties of DP. We propose an Inverse Composition (IC) framework, in which a leader-follower model optimally designs a privacy strategy to achieve target guarantees.
arXiv Detail & Related papers (2024-08-21T21:45:13Z) - Federated Experiment Design under Distributed Differential Privacy [31.06808163362162]
We focus on rigorously protecting users' privacy while minimizing trust in service providers.
Although a vital component of modern A/B testing, private distributed experimentation has not previously been studied.
We show how these mechanisms can be scaled up to handle the very large number of participants commonly found in practice.
arXiv Detail & Related papers (2023-11-07T22:38:56Z) - Improving the Variance of Differentially Private Randomized Experiments through Clustering [16.166525280886578]
We propose a new differentially private mechanism, "Cluster-DP". We demonstrate that selecting higher-quality clusters, according to a quality metric we introduce, can decrease the variance penalty without compromising privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies that the mechanism meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
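The three EVR steps described above can be sketched as follows. The estimator and verifier here are hypothetical stand-ins (a given estimate checked against a fixed cap), far simpler than the paper's randomized verifier, and the release step uses a standard Laplace mechanism:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two Exp(1) draws is Laplace(0, 1).
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def estimate_verify_release(true_answer: float, sensitivity: float,
                            eps_estimate: float, eps_cap: float):
    """Toy EVR flow: the 'estimate' step is assumed already done
    (eps_estimate), the 'verify' step just checks it against a cap,
    and the 'release' step adds Laplace noise at the verified eps.
    A deliberate simplification of the paper's randomized verifier."""
    if eps_estimate > eps_cap:          # verify fails: loss too large
        return None                     # refuse to release
    scale = sensitivity / eps_estimate  # standard Laplace calibration
    return true_answer + laplace_noise(scale)

random.seed(0)
print(estimate_verify_release(100.0, 1.0, eps_estimate=0.5, eps_cap=1.0))
print(estimate_verify_release(100.0, 1.0, eps_estimate=2.0, eps_cap=1.0))  # None
```

The second call illustrates the point of the verify step: a claimed privacy parameter that exceeds the budget blocks the release instead of silently leaking.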
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman--Pearson optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
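The NPO adversary this paper relaxes is characterized by the standard hypothesis-testing bound on DP: at type I error α, any (ε, δ)-DP mechanism forces type II error of at least max(0, 1 − δ − e^ε·α, e^{−ε}(1 − δ − α)). The sketch below computes that baseline tradeoff curve; the paper's GLRT analysis tightens guarantees beyond it and is not reproduced here:

```python
import math

def min_type2_error(alpha: float, eps: float, delta: float = 0.0) -> float:
    """Lower bound on an adversary's type II error (failing to detect
    the target record) at type I error `alpha`, under (eps, delta)-DP.
    This is the standard bound for the optimal Neyman--Pearson
    adversary, i.e. the baseline the GLRT adversary relaxes."""
    return max(0.0,
               1.0 - delta - math.exp(eps) * alpha,
               math.exp(-eps) * (1.0 - delta - alpha))

# With eps = 0 the mechanism reveals nothing: beta = 1 - alpha,
# exactly the blind-guessing tradeoff.
print(round(min_type2_error(0.05, 1.0), 3))
```

Larger ε pushes the curve toward zero, i.e. the adversary can detect membership with both error rates small, which is one intuitive reading of "less privacy".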
arXiv Detail & Related papers (2022-10-24T08:24:10Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.