Beyond the Calibration Point: Mechanism Comparison in Differential Privacy
- URL: http://arxiv.org/abs/2406.08918v2
- Date: Wed, 10 Jul 2024 08:01:26 GMT
- Title: Beyond the Calibration Point: Mechanism Comparison in Differential Privacy
- Authors: Georgios Kaissis, Stefan Kolek, Borja Balle, Jamie Hayes, Daniel Rueckert
- Abstract summary: In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single $(\varepsilon, \delta)$-pair.
This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given $(\varepsilon, \delta)$.
- Score: 29.635987854560828
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single $(\varepsilon, \delta)$-pair. This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given $(\varepsilon, \delta)$, and potentially introduces privacy vulnerabilities which can remain undetected. This motivates the need for robust, rigorous methods for comparing DP guarantees in such cases. Here, we introduce the $\Delta$-divergence between mechanisms which quantifies the worst-case excess privacy vulnerability of choosing one mechanism over another in terms of $(\varepsilon, \delta)$, $f$-DP and in terms of a newly presented Bayesian interpretation. Moreover, as a generalisation of the Blackwell theorem, it is endowed with strong decision-theoretic foundations. Through application examples, we show that our techniques can facilitate informed decision-making and reveal gaps in the current understanding of privacy risks, as current practices in DP-SGD often result in choosing mechanisms with high excess privacy vulnerabilities.
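As a simplified numerical illustration of the calibration-point problem (not the paper's $\Delta$-divergence itself), the sketch below calibrates a Laplace and a Gaussian mechanism to one shared $(\varepsilon, \delta)$ pair using their standard privacy profiles and then reports the excess $\delta$ one mechanism carries over the other at other $\varepsilon$ values; all parameters are illustrative.

```python
# Two mechanisms sharing one (eps, delta) point can differ elsewhere on their profiles.
# The profile formulas below are the standard ones for the Laplace and Gaussian mechanisms,
# not constructions from this paper.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def delta_gaussian(eps, psi):
    # Analytic Gaussian mechanism profile, psi = sensitivity / sigma (Balle & Wang, 2018).
    return norm.cdf(psi / 2 - eps / psi) - np.exp(eps) * norm.cdf(-psi / 2 - eps / psi)

def delta_laplace(eps, eps0):
    # Privacy profile of the Laplace mechanism with pure-DP parameter eps0 = sensitivity / scale.
    return np.maximum(0.0, 1.0 - np.exp((eps - eps0) / 2.0))

# Calibration point shared by both mechanisms.
eps_star, delta_star = 1.0, 1e-5
eps0 = eps_star - 2.0 * np.log(1.0 - delta_star)     # Laplace parameter hitting the point exactly
psi = brentq(lambda p: delta_gaussian(eps_star, p) - delta_star, 1e-3, 20.0)

# Worst-case excess delta of one mechanism over the other across a range of eps.
eps_grid = np.linspace(0.0, 3.0, 601)
excess = delta_laplace(eps_grid, eps0) - delta_gaussian(eps_grid, psi)
print(f"calibrated: eps0={eps0:.5f} (Laplace), psi={psi:.3f} (Gaussian)")
print(f"max excess delta of Laplace over Gaussian: {excess.max():.3f} "
      f"at eps={eps_grid[excess.argmax()]:.2f}")
```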
Related papers
- Private Language Models via Truncated Laplacian Mechanism [18.77713904999236]
We propose a novel private embedding method called the high dimensional truncated Laplacian mechanism.
We show that our method has a lower variance compared to the previous private word embedding methods.
Remarkably, even in the high privacy regime, our approach only incurs a slight decrease in utility compared to the non-private scenario.
arXiv Detail & Related papers (2024-10-10T15:25:02Z)
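For the truncated Laplacian entry above, the following is a generic sketch of truncated Laplace noise obtained by rejection sampling; the paper's high-dimensional construction and its calibration of the scale $b$ and truncation bound $A$ are not reproduced here, so both values are placeholders.

```python
# Generic truncated Laplace noise via rejection sampling; b and A are illustrative only.
import numpy as np

def truncated_laplace_noise(shape, b, A, rng=None):
    """Draw Laplace(0, b) noise conditioned on lying in [-A, A]."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(int(np.prod(shape)))
    filled = 0
    while filled < out.size:
        cand = rng.laplace(0.0, b, size=out.size - filled)
        keep = cand[np.abs(cand) <= A]          # reject samples outside the bounded support
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out.reshape(shape)

# Privatize a toy embedding matrix by adding bounded noise to every coordinate.
embedding = np.random.default_rng(0).normal(size=(4, 8))
noisy = embedding + truncated_laplace_noise(embedding.shape, b=0.5, A=2.0)
```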
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
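For the bounded-support Gaussian entry above, a minimal sketch of one natural variant: clamping (rectifying) the Gaussian mechanism's output to a known interval. The exact modification and its data-dependent accounting in the paper may differ; the bounds and noise scale below are illustrative.

```python
# Gaussian mechanism whose output is projected onto a bounded interval [lo, hi].
import numpy as np

def clamped_gaussian_mechanism(query_value, sensitivity, sigma, lo, hi, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    noisy = query_value + rng.normal(0.0, sigma * sensitivity)   # sigma acts as a noise multiplier
    return float(np.clip(noisy, lo, hi))                         # bounded support via clamping

# Example: privatize a mean statistic known to lie in [0, 1].
release = clamped_gaussian_mechanism(query_value=0.37, sensitivity=0.01, sigma=2.0, lo=0.0, hi=1.0)
print(release)
```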
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
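For the subsampling entry above, the classical mechanism-agnostic amplification bound for Poisson subsampling is easy to state in code; the paper's contribution is a framework for tighter, mechanism-specific guarantees, so the bound below is only the generic baseline.

```python
# Classical amplification bound: an (eps, delta)-DP base mechanism run on a Poisson
# subsample with rate q satisfies (log(1 + q*(e^eps - 1)), q*delta)-DP.
import math

def amplify_by_poisson_subsampling(eps, delta, q):
    eps_amp = math.log(1.0 + q * (math.exp(eps) - 1.0))
    return eps_amp, q * delta

print(amplify_by_poisson_subsampling(eps=2.0, delta=1e-6, q=0.01))  # roughly (0.062, 1e-08)
```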
- Fixed-Budget Differentially Private Best Arm Identification [62.36929749450298]
We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints.
We derive a minimax lower bound on the error probability, and demonstrate that the lower and the upper bounds decay exponentially in $T$.
arXiv Detail & Related papers (2024-01-17T09:23:25Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
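For the $f$-DP entry above, a small sketch of the object such guarantees are phrased in: the trade-off curve between type-I error $\alpha$ and the best achievable type-II error. The generic $(\varepsilon, \delta)$ trade-off function below (from the $f$-DP literature) is only a bound; the paper derives tight curves for specific discrete-valued mechanisms.

```python
# Generic trade-off function f_{eps,delta}(alpha) = max(0, 1 - delta - e^eps * alpha,
# e^{-eps} * (1 - delta - alpha)), the curve that (eps, delta)-DP corresponds to in f-DP.
import numpy as np

def tradeoff_eps_delta(alpha, eps, delta):
    """Lower bound on the type-II error of any test between neighbouring inputs."""
    a = np.asarray(alpha, dtype=float)
    return np.maximum.reduce([np.zeros_like(a),
                              1.0 - delta - np.exp(eps) * a,
                              np.exp(-eps) * (1.0 - delta - a)])

alpha = np.linspace(0.0, 1.0, 5)
print(tradeoff_eps_delta(alpha, eps=1.0, delta=1e-5))
```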
- Directional Privacy for Deep Learning [2.826489388853448]
Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models.
Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable for preserving utility.
We show that this provides both $\epsilon$-DP and $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism.
arXiv Detail & Related papers (2022-11-09T05:18:08Z)
- Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions [49.726408540784334]
A key question in privacy loss distribution (PLD)-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support.
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
arXiv Detail & Related papers (2022-07-10T04:25:02Z)
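For the privacy loss distribution entry above, a minimal sketch of discrete PLD accounting: $\delta(\varepsilon) = \sum_i p_i \max(0, 1 - e^{\varepsilon - \ell_i})$, and rounding losses up to a grid can only increase this quantity, giving a pessimistic discretisation. The paper's connect-the-dots construction is a tighter scheme than this naive rounding, which is shown only for intuition; the toy PLD values are illustrative.

```python
# delta(eps) from a discrete PLD, plus a naive "round losses up" pessimistic discretisation.
import numpy as np

def delta_from_pld(losses, probs, eps):
    losses, probs = np.asarray(losses, float), np.asarray(probs, float)
    return float(np.sum(probs * np.maximum(0.0, 1.0 - np.exp(eps - losses))))

def pessimistic_round_up(losses, grid_step):
    return np.ceil(np.asarray(losses, float) / grid_step) * grid_step

losses = [-0.3, 0.1, 0.8, 1.7]            # toy privacy-loss support (log-likelihood ratios)
probs = [0.2, 0.5, 0.25, 0.05]
print(delta_from_pld(losses, probs, eps=1.0))
print(delta_from_pld(pessimistic_round_up(losses, 0.25), probs, eps=1.0))  # >= the exact value
```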
- The Poisson binomial mechanism for secure and private federated learning [19.399122892615573]
We introduce a discrete differential privacy mechanism for distributed mean estimation (DME) with applications to federated learning and analytics.
We provide a tight analysis of its privacy guarantees, showing that it achieves the same privacy-accuracy trade-offs as the continuous Gaussian mechanism.
arXiv Detail & Related papers (2022-07-09T05:46:28Z)
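For the Poisson binomial entry above, a heavily simplified sketch of the idea: each client encodes its bounded scalar as a Binomial success probability and sends an integer count, and the server's aggregate (a Poisson binomial) yields an unbiased mean estimate. The parameters m and theta and the privacy calibration below are illustrative, not the paper's.

```python
# Toy distributed mean estimation with Binomial encodings; parameters are placeholders.
import numpy as np

def client_encode(x, c, m, theta, rng):
    p = 0.5 + theta * x / c                 # map x in [-c, c] to a valid success probability
    return rng.binomial(m, p)               # discrete, bounded message

def server_decode(counts, n, c, m, theta):
    # Invert the encoding in expectation to obtain an unbiased estimate of the mean.
    return (np.sum(counts) - n * m / 2.0) * c / (m * theta * n)

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=1000)      # clients' private scalars, c = 1
counts = [client_encode(x, c=1.0, m=64, theta=0.25, rng=rng) for x in xs]
print(np.mean(xs), server_decode(counts, n=len(xs), c=1.0, m=64, theta=0.25))
```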
- A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index [61.675604648670095]
We argue that the three prevailing interpretations of the Gaussian mechanism (GM), namely $(\varepsilon, \delta)$-DP, $f$-DP and Rényi DP, can be expressed by using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
arXiv Detail & Related papers (2021-09-22T06:20:01Z)
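For the sensitivity-index entry above, a sketch assuming $\psi$ is the ratio of query sensitivity to noise standard deviation: the Gaussian mechanism's $(\varepsilon, \delta)$ profile can then be written purely in terms of $\psi$ via the analytic Gaussian mechanism formula, and the mechanism is $\mu$-GDP with $\mu = \psi$.

```python
# delta(eps) of the Gaussian mechanism as a function of psi = sensitivity / sigma
# (analytic Gaussian mechanism formula); psi as defined here is an assumption.
import numpy as np
from scipy.stats import norm

def gm_delta(eps, psi):
    return norm.cdf(psi / 2 - eps / psi) - np.exp(eps) * norm.cdf(-psi / 2 - eps / psi)

sensitivity, sigma = 1.0, 4.0
psi = sensitivity / sigma
print(gm_delta(eps=1.0, psi=psi))
```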
- Local Differential Privacy Is Equivalent to Contraction of $E_\gamma$-Divergence [7.807294944710216]
We show that LDP constraints can be equivalently cast in terms of the contraction coefficient of the $E_\gamma$-divergence.
We then use this equivalent formula to express LDP guarantees of privacy mechanisms in terms of contraction coefficients of arbitrary $f$-divergences.
arXiv Detail & Related papers (2021-02-02T02:18:12Z)
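For the $E_\gamma$-divergence entry above, a small sketch of the divergence itself for discrete distributions, $E_\gamma(P\|Q) = \sum_x \max(P(x) - \gamma Q(x), 0)$; a mechanism is $(\varepsilon, \delta)$-DP exactly when this quantity with $\gamma = e^\varepsilon$ is at most $\delta$ for all neighbouring inputs, which is the object whose contraction the paper characterises.

```python
# Hockey-stick (E_gamma) divergence between two discrete distributions.
import numpy as np

def e_gamma(p, q, gamma):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.maximum(p - gamma * q, 0.0)))

# Binary randomized response with eps0 = ln(3): output distributions on neighbouring inputs.
p = [0.75, 0.25]
q = [0.25, 0.75]
print(e_gamma(p, q, gamma=3.0))   # gamma = e^{eps0}, so the divergence is 0 (delta = 0)
```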