Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions
- URL: http://arxiv.org/abs/2207.04380v1
- Date: Sun, 10 Jul 2022 04:25:02 GMT
- Title: Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions
- Authors: Vadym Doroshenko and Badih Ghazi and Pritish Kamath and Ravi Kumar and Pasin Manurangsi
- Abstract summary: Key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support.
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
- Score: 49.726408540784334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The privacy loss distribution (PLD) provides a tight characterization of the
privacy loss of a mechanism in the context of differential privacy (DP). Recent
work has shown that PLD-based accounting allows for tighter $(\varepsilon,
\delta)$-DP guarantees for many popular mechanisms compared to other known
methods. A key question in PLD-based accounting is how to approximate any
(potentially continuous) PLD with a PLD over any specified discrete support.
We present a novel approach to this problem. Our approach supports both
pessimistic estimation, which overestimates the hockey-stick divergence (i.e.,
$\delta$) for any value of $\varepsilon$, and optimistic estimation, which
underestimates the hockey-stick divergence. Moreover, we show that our
pessimistic estimate is the best possible among all pessimistic estimates.
Experimental evaluation shows that our approach can work with much larger
discretization intervals while keeping a similar error bound compared to
previous approaches, and yet gives a better approximation than existing methods.
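As background for the abstract above, the hockey-stick divergence can be read off a discrete PLD directly: for a privacy loss random variable $L$, $\delta(\varepsilon) = \mathbb{E}_L[\max(0, 1 - e^{\varepsilon - L})]$. A minimal Python sketch of this computation (the function name and the toy randomized-response PLD are illustrative, not the paper's code):

```python
import math

def delta_from_pld(losses, probs, eps):
    """Hockey-stick divergence delta(eps) of a discrete PLD.

    For a privacy loss variable L with atoms `losses` and probabilities
    `probs`, delta(eps) = E[max(0, 1 - e^{eps - L})].
    """
    return sum(p * max(0.0, 1.0 - math.exp(eps - l))
               for l, p in zip(losses, probs))

# Toy PLD: randomized response with eps0 = 1.0 puts mass on +/- eps0.
eps0 = 1.0
q = math.exp(eps0) / (1.0 + math.exp(eps0))
losses, probs = [eps0, -eps0], [q, 1.0 - q]

print(delta_from_pld(losses, probs, 0.0))   # delta at eps = 0
print(delta_from_pld(losses, probs, eps0))  # 0.0: the mechanism is eps0-DP
```

A pessimistic discrete approximation of a continuous PLD must overestimate this quantity at every $\varepsilon$; an optimistic one must underestimate it.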
Related papers
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capture and ensure the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Beyond the Calibration Point: Mechanism Comparison in Differential Privacy [29.635987854560828]
In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single $(\varepsilon, \delta)$-pair.
This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given $(\varepsilon, \delta)$.
arXiv Detail & Related papers (2024-06-13T08:30:29Z)
- Fixed-Budget Differentially Private Best Arm Identification [62.36929749450298]
We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints.
We derive a minimax lower bound on the error probability, and demonstrate that the lower and the upper bounds decay exponentially in $T$.
arXiv Detail & Related papers (2024-01-17T09:23:25Z)
- A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^{n}\frac{1-\delta_i}{1+e^{\epsilon_i}}-\max_{i}\frac{1-\delta_i}{1+e^{\epsilon_i}}}}$.
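Reading the formula in this entry as $\mu = \sqrt{2 / (\sum_{i=1}^{n} \frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_i \frac{1-\delta_i}{1+e^{\epsilon_i}})}$, it can be transcribed directly; a small sketch (function name and example parameters are illustrative):

```python
import math

def shuffle_gdp_mu(eps, delta):
    """mu of the approximate mu-GDP guarantee for a shuffled
    (eps_i, delta_i)-PLDP process, transcribing the quoted formula."""
    # Per-user terms (1 - delta_i) / (1 + e^{eps_i}).
    terms = [(1.0 - d) / (1.0 + math.exp(e)) for e, d in zip(eps, delta)]
    # mu = sqrt(2 / (sum of terms minus the largest term)).
    return math.sqrt(2.0 / (sum(terms) - max(terms)))

# mu shrinks roughly like 1/sqrt(n) as more users are shuffled together.
print(shuffle_gdp_mu([1.0] * 100, [1e-6] * 100))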
arXiv Detail & Related papers (2023-12-22T02:31:46Z)
- Online non-parametric likelihood-ratio estimation by Pearson-divergence functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of i.i.d. observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
arXiv Detail & Related papers (2023-11-03T13:20:11Z)
- Robust computation of optimal transport by $\beta$-potential regularization [79.24513412588745]
Optimal transport (OT) has become a widely used tool in the machine learning field to measure the discrepancy between probability distributions.
We propose regularizing OT with the $\beta$-potential term associated with the so-called $\beta$-divergence.
We experimentally demonstrate that the transport matrix computed with our algorithm helps estimate a probability distribution robustly even in the presence of outliers.
arXiv Detail & Related papers (2022-12-26T18:37:28Z)
- On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks [23.502721524477444]
We present a synthetic example illustrating how this approach can lead to very poor but stable estimates.
We identify the culprit to be the log-likelihood loss, along with certain conditions that exacerbate the issue.
We present an alternative formulation, termed $\beta$-NLL, in which each data point's contribution to the loss is weighted by the $\beta$-exponentiated variance estimate.
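The reweighting described in this entry can be sketched in a few lines of NumPy (forward value only; names are illustrative, and in a real training framework the weight must be detached from the gradient, which is the point of the formulation):

```python
import numpy as np

def beta_nll(mean, var, y, beta=0.5):
    """Per-point Gaussian NLL reweighted by var**beta.

    beta = 0 recovers the plain NLL; larger beta down-weights the
    influence of low-variance points on the fit.
    """
    nll = 0.5 * (np.log(var) + (y - mean) ** 2 / var)
    weight = var ** beta  # stop-gradient in the original formulation
    return float(np.mean(weight * nll))

y = np.array([0.0, 1.0, 2.0])
print(beta_nll(np.zeros(3), np.ones(3), y))  # at var = 1 this equals plain NLL
```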
arXiv Detail & Related papers (2022-03-17T08:46:17Z)
- Local Differential Privacy Is Equivalent to Contraction of $E_\gamma$-Divergence [7.807294944710216]
We show that LDP constraints can be equivalently cast in terms of the contraction coefficient of the $E_\gamma$-divergence.
We then use this equivalent formula to express LDP guarantees of privacy mechanisms in terms of contraction coefficients of arbitrary $f$-divergences.
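The $E_\gamma$-divergence here is the hockey-stick divergence $E_\gamma(P \| Q) = \sum_x \max(0, P(x) - \gamma Q(x))$, which for discrete distributions is a one-liner (the example distributions are illustrative):

```python
def e_gamma(p, q, gamma):
    """Hockey-stick divergence E_gamma(P||Q) = sum_x max(0, P(x) - gamma*Q(x)).

    gamma = 1 recovers total variation distance; a mechanism K is
    (pure) eps-LDP iff E_{e^eps}(K(.|x) || K(.|x')) = 0 for all x, x'.
    """
    return sum(max(0.0, pi - gamma * qi) for pi, qi in zip(p, q))

p, q = [0.7, 0.3], [0.4, 0.6]
print(e_gamma(p, q, 1.0))   # total variation distance between P and Q
print(e_gamma(p, q, 1.75))  # 0: P(x) <= 1.75 * Q(x) everywhere
```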
arXiv Detail & Related papers (2021-02-02T02:18:12Z)
- CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z)
- Tight Differential Privacy for Discrete-Valued Mechanisms and for the Subsampled Gaussian Mechanism Using FFT [6.929834518749884]
We propose a numerical accountant for evaluating the tight $(\varepsilon,\delta)$-privacy loss for algorithms with discrete one-dimensional output.
We show that our approach allows decreasing noise variance up to 75 percent at equal privacy compared to existing bounds in the literature.
arXiv Detail & Related papers (2020-06-12T12:46:42Z)
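The connection between FFT and tight accounting in the entry above is that privacy losses add under composition, so the composed PLD is a convolution of the per-step loss distributions. A minimal NumPy sketch of that convolution step (grid bookkeeping and tail truncation, which a real accountant must handle carefully, are omitted; names are illustrative):

```python
import numpy as np

def self_compose_pld(probs, k):
    """k-fold self-composition of a discrete PLD.

    `probs` are probabilities on an equally spaced loss grid; since
    losses add under composition, probabilities convolve.  np.convolve
    can be swapped for an FFT-based convolution for large k.
    If per-step losses sit at l0 + j*h, composed losses sit at k*l0 + j*h.
    """
    out = np.array([1.0])  # PLD of the empty composition: loss 0 w.p. 1
    for _ in range(k):
        out = np.convolve(out, probs)
    return out

probs = np.array([0.2, 0.5, 0.3])   # toy PLD on a 3-point grid
composed = self_compose_pld(probs, 4)
print(composed.sum())               # still sums to 1: a valid distribution
```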
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.