Differential Confounding Privacy and Inverse Composition
- URL: http://arxiv.org/abs/2408.12010v3
- Date: Thu, 23 Jan 2025 04:05:47 GMT
- Title: Differential Confounding Privacy and Inverse Composition
- Authors: Tao Zhang, Bradley A. Malin, Netanel Raviv, Yevgeniy Vorobeychik
- Abstract summary: We introduce Differential Confounding Privacy (DCP), a framework that generalizes Differential Privacy (DP).
We show that DCP mechanisms retain privacy guarantees under composition, but they lack the graceful compositional properties of DP.
We propose an Inverse Composition (IC) framework, where a leader-follower model optimally designs a privacy strategy to achieve target guarantees without relying on worst-case privacy proofs.
- Score: 32.85314813605347
- Abstract: Differential privacy (DP) has become the gold standard for privacy-preserving data analysis, but its applicability can be limited in scenarios involving complex dependencies between sensitive information and datasets. To address this, we introduce Differential Confounding Privacy (DCP), a framework that generalizes DP by accounting for broader causal relationships between secrets and datasets. DCP adopts the $(\epsilon, \delta)$-privacy framework to quantify privacy loss, particularly under the composition of multiple mechanisms accessing the same dataset. We show that while DCP mechanisms retain privacy guarantees under composition, they lack the graceful compositional properties of DP. To overcome this, we propose an Inverse Composition (IC) framework, where a leader-follower model optimally designs a privacy strategy to achieve target guarantees without relying on worst-case privacy proofs. Experimental results validate IC's effectiveness in managing privacy budgets and ensuring rigorous privacy guarantees under composition.
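To make the composition baseline concrete, below is a minimal Python sketch of standard $(\epsilon, \delta)$ budget accounting under basic composition, paired with a toy leader-style budget split. The function names are illustrative; this is the textbook DP rule, not the paper's DCP analysis or the IC algorithm itself.

```python
# Minimal sketch of (epsilon, delta) budget accounting under basic
# composition -- the "graceful" DP behavior the paper contrasts with DCP.
# This is the textbook rule (budgets add), not the paper's DCP-specific
# analysis or the Inverse Composition framework.

def compose_basic(budgets):
    """Basic composition: k mechanisms that are (eps_i, delta_i)-DP on the
    same dataset jointly satisfy (sum eps_i, sum delta_i)-DP."""
    eps = sum(e for e, _ in budgets)
    delta = sum(d for _, d in budgets)
    return eps, delta

def split_target_budget(eps_target, delta_target, k):
    """Toy 'leader' strategy: divide a target guarantee evenly across k
    queries so basic composition meets (eps_target, delta_target)."""
    return [(eps_target / k, delta_target / k)] * k

if __name__ == "__main__":
    per_query = split_target_budget(eps_target=1.0, delta_target=1e-5, k=4)
    print(compose_basic(per_query))  # -> (1.0, ~1e-05)
```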
Related papers
- Meeting Utility Constraints in Differential Privacy: A Privacy-Boosting Approach [7.970280110429423]
We propose a privacy-boosting framework that is compatible with most noise-adding DP mechanisms.
Our framework enhances the likelihood of outputs falling within a preferred subset of the support to meet utility requirements.
We show that our framework achieves lower privacy loss than standard DP mechanisms under utility constraints.
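For context, here is a minimal sketch of the kind of noise-adding baseline such a framework could wrap, assuming the standard Laplace mechanism (noise scale sensitivity/epsilon yields epsilon-DP). The boosting step itself, which reshapes output probabilities toward a preferred subset, is specific to the paper and is not reproduced here.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Textbook epsilon-DP release for a query with the given L1 sensitivity:
    add Laplace noise with scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: release a count (sensitivity 1) under epsilon = 0.5.
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```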
arXiv Detail & Related papers (2024-12-13T23:34:30Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
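A control-flow sketch of the three EVR steps as summarized above; `estimate_epsilon`, `verify_guarantee`, and `mechanism` are hypothetical placeholders, not the paper's actual estimator or verifier.

```python
# Sketch of the estimate-verify-release control flow. All callables here
# are hypothetical stand-ins for the paper's components.

def evr_release(mechanism, data, estimate_epsilon, verify_guarantee):
    eps_hat = estimate_epsilon(mechanism)         # 1. estimate the privacy parameter
    if not verify_guarantee(mechanism, eps_hat):  # 2. verify the mechanism meets it
        raise RuntimeError("verification failed; refusing to release")
    return mechanism(data), eps_hat               # 3. release the query output
```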
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
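As a concrete instance of such a discrete-valued local mechanism, here is binary randomized response with its standard pure-LDP epsilon in closed form; the paper's tighter $f$-DP trade-off analysis is not reproduced here.

```python
import math
import random

def randomized_response(bit, p=0.75):
    """Report the true bit with probability p (requires 0.5 < p < 1),
    otherwise flip it -- a classic discrete local mechanism."""
    return bit if random.random() < p else 1 - bit

def rr_epsilon(p):
    """Standard pure epsilon-LDP level of binary randomized response."""
    return math.log(p / (1 - p))

print(rr_epsilon(0.75))  # ~1.0986 (= ln 3)
```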
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privately Publishable Per-instance Privacy [21.775752827149383]
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).
We analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
arXiv Detail & Related papers (2021-11-03T15:17:29Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g., exponential mechanisms), or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
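For reference, here is a sketch of the two textbook bounds this line of work refines: basic composition (epsilons add linearly) versus the advanced composition theorem's sublinear bound. These are the standard formulas, not the paper's optimal bounds.

```python
import math

def basic_composition(eps, k):
    """k mechanisms, each eps-DP, compose to (k * eps)-DP."""
    return k * eps

def advanced_composition(eps, k, delta_prime):
    """Advanced composition (Dwork-Rothblum-Vadhan): the same k mechanisms
    are (eps', delta')-DP for any delta' > 0, with sublinear eps'."""
    return (eps * math.sqrt(2 * k * math.log(1 / delta_prime))
            + k * eps * (math.exp(eps) - 1))

print(basic_composition(0.1, 100))           # 10.0
print(advanced_composition(0.1, 100, 1e-6))  # ~6.3
```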
arXiv Detail & Related papers (2020-04-15T17:33:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.