Confounding Privacy and Inverse Composition
- URL: http://arxiv.org/abs/2408.12010v1
- Date: Wed, 21 Aug 2024 21:45:13 GMT
- Title: Confounding Privacy and Inverse Composition
- Authors: Tao Zhang, Bradley A. Malin, Netanel Raviv, Yevgeniy Vorobeychik
- Abstract summary: In differential privacy, sensitive information is contained in the dataset, while in Pufferfish privacy, sensitive information determines the data distribution.
We introduce a novel privacy notion of ($\epsilon, \delta$)-confounding privacy that generalizes both differential privacy and Pufferfish privacy.
- Score: 32.85314813605347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel privacy notion of ($\epsilon, \delta$)-confounding privacy that generalizes both differential privacy and Pufferfish privacy. In differential privacy, sensitive information is contained in the dataset while in Pufferfish privacy, sensitive information determines data distribution. Consequently, both assume a chain-rule relationship between the sensitive information and the output of privacy mechanisms. Confounding privacy, in contrast, considers general causal relationships between the dataset and sensitive information. One of the key properties of differential privacy is that it can be easily composed over multiple interactions with the mechanism that maps private data to publicly shared information. In contrast, we show that the quantification of the privacy loss under the composition of independent ($\epsilon, \delta$)-confounding private mechanisms using the optimal composition of differential privacy \emph{underestimates} true privacy loss. To address this, we characterize an inverse composition framework to tightly implement a target global ($\epsilon_{g}, \delta_{g}$)-confounding privacy under composition while keeping individual mechanisms independent and private. In particular, we propose a novel copula-perturbation method which ensures that (1) each individual mechanism $i$ satisfies a target local ($\epsilon_{i}, \delta_{i}$)-confounding privacy and (2) the target global ($\epsilon_{g}, \delta_{g}$)-confounding privacy is tightly implemented by solving an optimization problem. Finally, we study inverse composition empirically on real datasets.
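For context, the sketch below makes the composition baseline concrete. It computes the basic and advanced composition bounds for $k$ independent $(\epsilon, \delta)$-DP mechanisms; these are standard DP results, not the paper's method, and the paper's point is that even the optimal bound of this kind underestimates the true loss for confounding-private mechanisms.

```python
import math

def basic_composition(eps: float, delta: float, k: int):
    """Basic composition: privacy parameters add linearly over k mechanisms."""
    return k * eps, k * delta

def advanced_composition(eps: float, delta: float, k: int, delta_slack: float):
    """Advanced composition (Dwork-Rothblum-Vadhan): the k-fold composition
    of (eps, delta)-DP mechanisms is (eps_g, k*delta + delta_slack)-DP.
    Standard DP accounting only -- not the paper's confounding-privacy bound."""
    eps_g = (math.sqrt(2 * k * math.log(1 / delta_slack)) * eps
             + k * eps * (math.exp(eps) - 1))
    return eps_g, k * delta + delta_slack

eps, delta, k = 0.1, 1e-6, 100
print(basic_composition(eps, delta, k))           # ~(10.0, 1e-4)
print(advanced_composition(eps, delta, k, 1e-6))  # eps_g ~ 6.3, well below 10.0
```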
Related papers
- Models Matter: Setting Accurate Privacy Expectations for Local and Central Differential Privacy [14.40391109414476]
We design and evaluate new explanations of differential privacy for the local and central models.
We find that consequences-focused explanations in the style of privacy nutrition labels are a promising approach for setting accurate privacy expectations.
arXiv Detail & Related papers (2024-08-16T01:21:57Z)
- Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts pay only the privacy cost of the least noisy (most accurate) answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
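A minimal sketch of the filter idea, with an illustrative class name and deliberately naive linear accounting (the cited papers construct filters with much tighter, advanced-composition-rate bounds):

```python
class BasicPrivacyFilter:
    """Admits adaptively chosen (eps, delta) requests until a fixed global
    budget would be exceeded, then halts. Illustrative only."""

    def __init__(self, eps_budget: float, delta_budget: float):
        self.eps_budget, self.delta_budget = eps_budget, delta_budget
        self.eps_spent, self.delta_spent = 0.0, 0.0

    def request(self, eps: float, delta: float) -> bool:
        """Return True iff the next mechanism may run within the budget."""
        if (self.eps_spent + eps <= self.eps_budget
                and self.delta_spent + delta <= self.delta_budget):
            self.eps_spent += eps
            self.delta_spent += delta
            return True
        return False  # HALT: running this mechanism would exceed the budget

# The analyst picks each round's parameters after seeing earlier outputs.
f = BasicPrivacyFilter(eps_budget=1.0, delta_budget=1e-5)
assert f.request(0.4, 0.0) and f.request(0.5, 0.0)
assert not f.request(0.2, 0.0)  # 0.9 + 0.2 would exceed the budget of 1.0
```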
arXiv Detail & Related papers (2023-06-24T00:33:34Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
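For reference, $f$-DP (due to Dong, Roth, and Su) casts privacy as a lower bound $f$ on the trade-off between the type I and type II errors of any test distinguishing neighboring datasets:

```latex
% Trade-off function: the least type II error achievable at
% type I error at most \alpha, over all rejection rules \phi.
T(P, Q)(\alpha) = \inf_{\phi}\{\beta_\phi : \alpha_\phi \le \alpha\},
\qquad \alpha_\phi = \mathbb{E}_P[\phi], \quad \beta_\phi = 1 - \mathbb{E}_Q[\phi].

% A mechanism M is f-DP if, for all neighboring datasets S \sim S',
T\big(M(S), M(S')\big) \ge f.
```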
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; it is difficult to apply, however, when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
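As a point of reference, the classical $k$-anonymity notion that smooth-$k$-anonymity relaxes is easy to state in code. The sketch below checks only the plain notion over hypothetical record fields; it does not implement the paper's smoothing or its large-scale algorithms:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Plain k-anonymity: every combination of quasi-identifier values
    must be shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [  # hypothetical data
    {"zip": "63130", "age": "20-29", "diagnosis": "flu"},
    {"zip": "63130", "age": "20-29", "diagnosis": "cold"},
    {"zip": "63112", "age": "30-39", "diagnosis": "flu"},
]
print(is_k_anonymous(records, ["zip", "age"], k=2))  # False: one group has size 1
```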
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Fully Adaptive Composition in Differential Privacy [53.01656650117495]
Well-known advanced composition theorems allow one to query a private database quadratically more times than basic privacy composition would permit.
We introduce fully adaptive composition, wherein both algorithms and their privacy parameters can be selected adaptively.
We construct filters that match the rates of advanced composition, including constants, despite allowing for adaptively chosen privacy parameters.
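The "quadratically more times" claim is the standard advanced composition theorem (Dwork, Rothblum, and Vadhan), reproduced here for reference:

```latex
% The k-fold composition of (\epsilon, \delta)-DP mechanisms is
% (\epsilon', k\delta + \delta')-DP for any \delta' > 0, where
\epsilon' = \sqrt{2k \ln(1/\delta')}\,\epsilon + k\,\epsilon\,(e^{\epsilon} - 1).

% For small per-query \epsilon, a global budget \epsilon_g thus admits
% k = \Theta\big((\epsilon_g/\epsilon)^2\big) queries, versus
% k = \epsilon_g/\epsilon under basic composition.
```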
arXiv Detail & Related papers (2022-03-10T17:03:12Z)
- Privately Publishable Per-instance Privacy [21.775752827149383]
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).
We analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
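One common formulation of per-instance DP, following Y.-X. Wang's definition (stated for reference; the paper may use a variant):

```latex
% pDP fixes both a dataset D and an individual datapoint z \in D.
% M is (\epsilon, \delta)-pDP for the pair (D, z) if, for every measurable S,
\Pr[M(D) \in S] \le e^{\epsilon}\,\Pr[M(D \setminus \{z\}) \in S] + \delta,
% and symmetrically with D and D \setminus \{z\} exchanged.
% Unlike worst-case DP, \epsilon here depends on (D, z), which is why the
% pDP losses themselves are sensitive and must be published privately.
```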
arXiv Detail & Related papers (2021-11-03T15:17:29Z)
- "I need a better description": An Investigation Into User Expectations For Differential Privacy [31.352325485393074]
We explore users' privacy expectations related to differential privacy.
We find that users care about the kinds of information leaks against which differential privacy protects.
We find that the ways in which differential privacy is described in the wild set users' privacy expectations haphazardly.
arXiv Detail & Related papers (2021-10-13T02:36:37Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
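For reference, the standard multi-user formulation of JDP (Kearns et al.) constrains only what the other users observe:

```latex
% M : \mathcal{D}^n \to \mathcal{O}^n is \epsilon-JDP if, for every user i,
% every pair D, D' differing only in user i's data, and every event S over
% the outputs delivered to the remaining users,
\Pr\big[M(D)_{-i} \in S\big] \le e^{\epsilon}\,\Pr\big[M(D')_{-i} \in S\big].
% User i's own output may depend arbitrarily on their data; only the view
% of the other users is constrained.
```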
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Auditing Differentially Private Machine Learning: How Private is Private SGD? [16.812900569416062]
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks.
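The logic of such audits can be sketched via the standard conversion from attack success rates to an empirical lower bound on $\epsilon$; the function and the numbers below are illustrative, not the paper's experiments:

```python
import math

def empirical_eps_lower_bound(tpr: float, fpr: float, delta: float = 0.0) -> float:
    """Any (eps, delta)-DP mechanism forces TPR <= e^eps * FPR + delta for
    every attack distinguishing two neighboring worlds, so an observed
    (TPR, FPR) pair certifies eps >= ln((TPR - delta) / FPR). A sound audit
    additionally needs confidence intervals on TPR/FPR (omitted here)."""
    if tpr <= delta or fpr <= 0:
        return 0.0
    return math.log((tpr - delta) / fpr)

# Illustrative: a poisoning attack separates the worlds with
# TPR = 0.30 at FPR = 0.01.
print(empirical_eps_lower_bound(0.30, 0.01))  # ~3.4: true eps is at least this
```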
arXiv Detail & Related papers (2020-06-13T20:00:18Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded-range (e.g., exponential mechanisms), or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
arXiv Detail & Related papers (2020-04-15T17:33:10Z)