Computing Differential Privacy Guarantees for Heterogeneous Compositions
Using FFT
- URL: http://arxiv.org/abs/2102.12412v1
- Date: Wed, 24 Feb 2021 17:05:38 GMT
- Title: Computing Differential Privacy Guarantees for Heterogeneous Compositions
Using FFT
- Authors: Antti Koskela and Antti Honkela
- Abstract summary: We extend the FFT-based privacy accountant to compositions of heterogeneous mechanisms and carry out a full error analysis for choosing the algorithm's parameters.
We also show how to speed up the evaluation of tight privacy guarantees using the Plancherel theorem at the cost of increased pre-computation and memory usage.
- Score: 5.837881923712393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recently proposed Fast Fourier Transform (FFT)-based accountant for
evaluating $(\varepsilon,\delta)$-differential privacy guarantees using the
privacy loss distribution formalism has been shown to give tighter bounds than
commonly used methods such as R\'enyi accountants when applied to compositions
of homogeneous mechanisms. This approach is also applicable to certain discrete
mechanisms that cannot be analysed with R\'enyi accountants. In this paper, we
extend this approach to compositions of heterogeneous mechanisms. We carry out
a full error analysis that allows choosing the parameters of the algorithm such
that a desired accuracy is obtained. Using our analysis, we also give a bound
for the computational complexity in terms of the error which is analogous to
and slightly tightens the one given by Murtagh and Vadhan (2018). We also show
how to speed up the evaluation of tight privacy guarantees using the Plancherel
theorem at the cost of increased pre-computation and memory usage.
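To make the mechanics concrete, here is a minimal numpy sketch of the underlying idea: discretise the privacy loss distribution (PLD) of each mechanism on a grid, compose heterogeneous mechanisms by convolving their PLDs with the FFT, and read off a tight $\delta(\varepsilon)$. The Gaussian-mechanism PLD, the grid parameters L and n, and the function names are illustrative assumptions; the paper's actual contributions, the error analysis that makes these choices principled and the Plancherel-based speedup, are not reproduced here.

```python
import numpy as np

def gaussian_pld(sigma, grid):
    """Discretised privacy loss distribution of a Gaussian mechanism with
    sensitivity 1: the privacy loss is N(mu, 2*mu) with mu = 1/(2*sigma^2)."""
    mu = 1.0 / (2.0 * sigma ** 2)
    dx = grid[1] - grid[0]
    pmf = np.exp(-(grid - mu) ** 2 / (4.0 * mu)) / np.sqrt(4.0 * np.pi * mu) * dx
    return pmf / pmf.sum()  # renormalise away the truncated tail mass

def fft_convolve(p, q):
    """Linear convolution of two PMFs via zero-padded FFT."""
    n = len(p) + len(q) - 1
    m = 1 << (n - 1).bit_length()  # next power of two for the FFT
    c = np.fft.irfft(np.fft.rfft(p, m) * np.fft.rfft(q, m), m)
    return np.clip(c[:n], 0.0, None)  # clip FFT round-off below zero

# Truncation bound L and number of grid points n (illustrative values).
L, n = 20.0, 2 ** 14
dx = 2.0 * L / n
grid = -L + dx * np.arange(n)

# Heterogeneous composition: two Gaussian mechanisms with different noise levels.
composed = fft_convolve(gaussian_pld(2.0, grid), gaussian_pld(4.0, grid))
grid_c = -2.0 * L + dx * np.arange(len(composed))  # convolution doubles the range

# Tight delta(eps) from the composed PLD: E[max(0, 1 - exp(eps - s))].
eps = 1.0
delta = np.sum(np.maximum(0.0, 1.0 - np.exp(eps - grid_c)) * composed)
print(f"delta({eps}) ~= {delta:.3e}")
```

With these mechanics in place, the paper's error analysis determines how large L and n must be for a desired accuracy, and the Plancherel trick trades pre-computation and memory for faster repeated evaluations.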
Related papers
- Instance-Specific Asymmetric Sensitivity in Differential Privacy [2.855485723554975]
We build upon previous work that gives a paradigm for selecting an output through the exponential mechanism.
Our framework slightly modifies the closeness metric and instead gives a simple and efficient application of the sparse vector technique.
arXiv Detail & Related papers (2023-11-02T05:01:45Z)
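As context for the entry above, a minimal sketch of the standard exponential mechanism it builds on; the function name and toy scores are illustrative, not the paper's API:

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng):
    # Select index i with probability proportional to
    # exp(eps * scores[i] / (2 * sensitivity)).
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()            # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng(0)
print(exponential_mechanism([3.0, 5.0, 4.0], eps=1.0, sensitivity=1.0, rng=rng))
```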
- Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation [8.9598796481325]
We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems.
We establish an information-computation gap for private sparse mean estimation.
We also give evidence for privacy-induced information-computation gaps for several other statistics and learning problems.
arXiv Detail & Related papers (2022-11-01T20:03:41Z)
- Individual Privacy Accounting with Gaussian Differential Privacy [8.81666701090743]
Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis.
In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms.
arXiv Detail & Related papers (2022-09-30T17:19:40Z)
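One concrete accountant of this kind is Gaussian differential privacy (GDP), where per-step $\mu$ parameters compose by adding in squares and a $\mu$-GDP guarantee converts to an $(\varepsilon,\delta)$ curve in closed form (Dong, Roth and Su). A minimal sketch, with hypothetical per-step values:

```python
import numpy as np
from scipy.stats import norm

def gdp_to_delta(mu, eps):
    # delta(eps) of a mu-GDP mechanism:
    # Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)
    return norm.cdf(-eps / mu + mu / 2.0) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2.0)

# Adaptive composition in GDP: the mu parameters add in squares.
per_step_mu = [0.3, 0.5, 0.2]  # hypothetical per-step (e.g. per-individual) losses
total_mu = float(np.sqrt(sum(m ** 2 for m in per_step_mu)))
print(f"total mu = {total_mu:.3f}, delta(eps=1) = {gdp_to_delta(total_mu, 1.0):.3e}")
```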
- Nonparametric extensions of randomized response for private confidence sets [51.75485869914048]
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).
We present confidence intervals (CI) and time-uniform confidence sequences (CS) for $\mu^\star$ when only given access to the privatized data.
arXiv Detail & Related papers (2022-02-17T16:04:49Z)
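For orientation, here is the classic binary randomized response mechanism that such nonparametric extensions start from, together with the debiased mean estimate around which a CI or CS would be built; this is the textbook mechanism, not the paper's construction:

```python
import numpy as np

def randomized_response(bits, eps, rng):
    # eps-LDP: report the true bit with prob p = e^eps / (e^eps + 1), else flip it.
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    keep = rng.random(len(bits)) < p
    return np.where(keep, bits, 1 - bits)

def debiased_mean(reports, eps):
    # E[report] = (2p - 1) * mean + (1 - p), so invert for an unbiased estimate.
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
bits = (rng.random(10_000) < 0.3).astype(int)  # true mean 0.3
reports = randomized_response(bits, eps=1.0, rng=rng)
print(debiased_mean(reports, eps=1.0))         # close to 0.3
```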
- A general sample complexity analysis of vanilla policy gradient [101.16957584135767]
Policy gradient (PG) is one of the most popular methods for solving reinforcement learning (RL) problems, yet the theoretical understanding of the "vanilla" PG method remains limited.
We provide a general sample complexity analysis of vanilla PG.
arXiv Detail & Related papers (2021-07-23T19:38:17Z)
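For reference, the "vanilla" PG (REINFORCE) update being analysed, sketched on a hypothetical softmax bandit; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.5, 0.9])  # hypothetical 3-armed bandit
theta = np.zeros(3)                     # softmax policy parameters
lr = 0.1

for _ in range(5000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    a = rng.choice(3, p=probs)
    reward = true_means[a] + rng.normal(0.0, 0.1)
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0               # gradient of log pi_theta(a)
    theta += lr * reward * grad_log_pi  # vanilla PG (REINFORCE) update
print(probs)  # mass should concentrate on the best arm
```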
- Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
We show empirically, on several machine learning tasks, that our method compares favorably to other state-of-the-art quantization methods in this context.
arXiv Detail & Related papers (2021-06-04T17:24:47Z)
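As background, a sketch of the unquantized random Fourier features (Rahimi and Recht, 2007) that these quantization methods compress; the entry's Sigma-Delta and noise-shaping quantizers would then be applied to the feature matrix Z:

```python
import numpy as np

def rff(X, n_features, sigma, rng):
    # Random Fourier features approximating the Gaussian kernel
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = rff(X, n_features=4096, sigma=1.0, rng=rng)
print(Z @ Z.T)                                        # approximate kernel matrix
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
print(np.exp(-D2 / 2.0))                              # exact kernel for comparison
```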
- Adversarial Robustness Guarantees for Gaussian Processes [22.403365399119107]
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.
We present a framework to analyse adversarial robustness of GPs, defined as invariance of the model's decision to bounded perturbations.
We develop a branch-and-bound scheme to refine the bounds and show, for any $\epsilon > 0$, that our algorithm is guaranteed to converge to values $\epsilon$-close to the actual values in finitely many iterations.
arXiv Detail & Related papers (2021-04-07T15:14:56Z)
- Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate.
Our algorithm is based on a new concentration technique, and we analyse its sample complexity.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z)
- Tight Differential Privacy for Discrete-Valued Mechanisms and for the Subsampled Gaussian Mechanism Using FFT [6.929834518749884]
We propose a numerical accountant for evaluating the tight $(\varepsilon,\delta)$-privacy loss for algorithms with discrete one-dimensional output.
We show that our approach allows decreasing the noise variance by up to 75 percent at equal privacy compared to existing bounds in the literature.
arXiv Detail & Related papers (2020-06-12T12:46:42Z)
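For a discrete mechanism, the tight $\delta(\varepsilon)$ in question is the hockey-stick divergence between the two neighbouring output distributions, which can be evaluated exactly. A minimal sketch, using one-bit randomized response as a stand-in example rather than the paper's subsampled Gaussian analysis:

```python
import numpy as np

def tight_delta(P, Q, eps):
    # Tight delta(eps) for a discrete mechanism with neighbouring output
    # pmfs P and Q: the hockey-stick divergence sum_i max(0, P_i - e^eps * Q_i).
    return float(np.sum(np.maximum(0.0, np.asarray(P) - np.exp(eps) * np.asarray(Q))))

# Example: one-bit randomized response with flip probability 0.25.
P = np.array([0.75, 0.25])  # output pmf on input 0
Q = np.array([0.25, 0.75])  # output pmf on input 1
print(tight_delta(P, Q, eps=0.5))
```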
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
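As a generic illustration of that accuracy/privacy trade-off (this is plain clipped-and-noised averaging, not the paper's Laplacian smoothing, and all names are hypothetical):

```python
import numpy as np

def private_average(updates, clip_norm, noise_multiplier, rng):
    # Clip each user's update to bound sensitivity, average, then add
    # Gaussian noise scaled to the clipping bound (generic DP-FL step).
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(100)]
print(private_average(updates, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```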
- Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with external uncertainty in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.