A unified interpretation of the Gaussian mechanism for differential
privacy through the sensitivity index
- URL: http://arxiv.org/abs/2109.10528v1
- Date: Wed, 22 Sep 2021 06:20:01 GMT
- Title: A unified interpretation of the Gaussian mechanism for differential
privacy through the sensitivity index
- Authors: Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander
Ziller, Dmitrii Usynin, Daniel Rueckert
- Abstract summary: We argue that the three prevailing interpretations of the GM, namely $(\varepsilon, \delta)$-DP, f-DP and R\'enyi DP can be expressed by using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
- Score: 61.675604648670095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Gaussian mechanism (GM) represents a universally employed tool for
achieving differential privacy (DP), and a large body of work has been devoted
to its analysis. We argue that the three prevailing interpretations of the GM,
namely $(\varepsilon, \delta)$-DP, f-DP and R\'enyi DP can be expressed by
using a single parameter $\psi$, which we term the sensitivity index. $\psi$
uniquely characterises the GM and its properties by encapsulating its two
fundamental quantities: the sensitivity of the query and the magnitude of the
noise perturbation. With strong links to the ROC curve and the
hypothesis-testing interpretation of DP, $\psi$ offers the practitioner a
powerful method for interpreting, comparing and communicating the privacy
guarantees of Gaussian mechanisms.
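As an illustrative sketch (not the paper's reference implementation): assuming the sensitivity index is taken as the ratio of the query's sensitivity to the noise scale, $\psi = \Delta/\sigma$ (the same quantity as the $\mu$ parameter of Gaussian DP; the paper gives the precise definition), the snippet below computes $\psi$ for a Gaussian mechanism and translates it into the hypothesis-testing (ROC) view via the standard Gaussian trade-off curve, and into a tight $(\varepsilon, \delta)$ guarantee via the analytic Gaussian-mechanism relation of Balle & Wang (2018).

```python
# Illustrative sketch only: relate a Gaussian mechanism's sensitivity Delta and
# noise scale sigma to a single index psi = Delta / sigma (assumed form, matching
# the mu parameter of Gaussian DP), then derive the f-DP trade-off curve and a
# tight (eps, delta) guarantee from psi using standard Gaussian-mechanism formulas.
import math


def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def phi_inv(p: float, lo: float = -10.0, hi: float = 10.0) -> float:
    """Standard normal quantile via bisection (sufficient for an illustration)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


def sensitivity_index(delta_sens: float, sigma: float) -> float:
    """psi = Delta / sigma (assumed form; equals the Gaussian-DP mu)."""
    return delta_sens / sigma


def tradeoff_curve(psi: float, alpha: float) -> float:
    """f-DP / ROC view: minimal type-II error of the optimal test at
    type-I error alpha, G_psi(alpha) = Phi(Phi^{-1}(1 - alpha) - psi)."""
    return phi(phi_inv(1.0 - alpha) - psi)


def delta_at_epsilon(psi: float, eps: float) -> float:
    """Tight delta for (eps, delta)-DP of the Gaussian mechanism:
    delta = Phi(psi/2 - eps/psi) - e^eps * Phi(-psi/2 - eps/psi)."""
    return phi(psi / 2.0 - eps / psi) - math.exp(eps) * phi(-psi / 2.0 - eps / psi)


if __name__ == "__main__":
    psi = sensitivity_index(delta_sens=1.0, sigma=2.0)  # psi = 0.5
    print(f"psi = {psi:.3f}")
    # Hypothesis-testing (ROC) interpretation at a few false-positive rates.
    for alpha in (0.01, 0.05, 0.1):
        print(f"  alpha = {alpha:.2f} -> min type-II error = {tradeoff_curve(psi, alpha):.4f}")
    # (eps, delta) interpretation of the same mechanism.
    for eps in (0.5, 1.0, 2.0):
        print(f"  eps = {eps:.1f} -> delta = {delta_at_epsilon(psi, eps):.2e}")
```

Under this reading, a single number $\psi$ fixes the entire trade-off curve and the whole $(\varepsilon, \delta)$ frontier of the mechanism, which is what makes it convenient for comparing and communicating Gaussian-mechanism guarantees.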
Related papers
- Beyond the Calibration Point: Mechanism Comparison in Differential Privacy [29.635987854560828]
In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single $(\varepsilon, \delta)$-pair.
This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given $(\varepsilon, \delta)$.
arXiv Detail & Related papers (2024-06-13T08:30:29Z)
- How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses obtained under the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z)
- Less is More: Revisiting the Gaussian Mechanism for Differential Privacy [8.89234867625102]
Differential privacy via output perturbation has been a de facto standard for releasing query or computation results on sensitive data.
We identify that all existing Gaussian mechanisms suffer from the curse of full-rank covariance matrices.
arXiv Detail & Related papers (2023-06-04T04:14:38Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman--Pearson optimal (NPO) adversary to a Generalised Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
arXiv Detail & Related papers (2022-10-24T08:24:10Z)
- The Poisson binomial mechanism for secure and private federated learning [19.399122892615573]
We introduce a discrete differential privacy mechanism for distributed mean estimation (DME) with applications to federated learning and analytics.
We provide a tight analysis of its privacy guarantees, showing that it achieves the same privacy-accuracy trade-offs as the continuous Gaussian mechanism.
arXiv Detail & Related papers (2022-07-09T05:46:28Z)
- Certifiably Robust Interpretation via Renyi Differential Privacy [77.04377192920741]
We study the problem of interpretation robustness from a new perspective of Renyi differential privacy (RDP).
First, it can offer provable and certifiable top-$k$ robustness.
Second, our proposed method offers $\sim 10\%$ better experimental robustness than existing approaches.
Third, our method can provide a smooth tradeoff between robustness and computational efficiency.
arXiv Detail & Related papers (2021-07-04T06:58:01Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Improved Matrix Gaussian Mechanism for Differential Privacy [29.865497421453917]
Differential privacy (DP) mechanisms are conventionally developed for scalar values, not for structural data like matrices.
Our work proposes the Improved Matrix Gaussian Mechanism (IMGM) for matrix-valued DP, based on the necessary and sufficient condition of $(\varepsilon,\delta)$-differential privacy.
Among the legitimate noise distributions for matrix-valued DP, we find that the optimal one is i.i.d. Gaussian.
Experiments on a variety of models and datasets also verify that IMGM yields much higher utility than the state-of-the-art mechanisms at the same privacy guarantee.
arXiv Detail & Related papers (2021-04-30T07:44:53Z)
- Tight Differential Privacy for Discrete-Valued Mechanisms and for the Subsampled Gaussian Mechanism Using FFT [6.929834518749884]
We propose a numerical accountant for evaluating the tight $(\varepsilon,\delta)$-privacy loss for algorithms with discrete one-dimensional output.
We show that our approach allows decreasing noise variance up to 75 percent at equal privacy compared to existing bounds in the literature.
arXiv Detail & Related papers (2020-06-12T12:46:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.