Three Variants of Differential Privacy: Lossless Conversion and
Applications
- URL: http://arxiv.org/abs/2008.06529v2
- Date: Sat, 23 Jan 2021 19:34:51 GMT
- Title: Three Variants of Differential Privacy: Lossless Conversion and
Applications
- Authors: Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, and
Lalitha Sankar
- Abstract summary: We consider three different variants of differential privacy (DP), namely approximate DP, Rényi DP (RDP), and hypothesis test DP.
In the first part, we develop a machinery for optimally relating approximate DP to RDP based on the joint range of two $f$-divergences.
As an application, we apply our result to the moments accountant framework for characterizing privacy guarantees of noisy stochastic gradient descent.
- Score: 13.057076084452016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider three different variants of differential privacy (DP), namely
approximate DP, Rényi DP (RDP), and hypothesis test DP. In the first part, we
develop a machinery for optimally relating approximate DP to RDP based on the
joint range of two $f$-divergences that underlie the approximate DP and RDP. In
particular, this enables us to derive the optimal approximate DP parameters of
a mechanism that satisfies a given level of RDP. As an application, we apply
our result to the moments accountant framework for characterizing privacy
guarantees of noisy stochastic gradient descent (SGD). When compared to the
state-of-the-art, our bounds may lead to about 100 more stochastic gradient
descent iterations for training deep learning models for the same privacy
budget. In the second part, we establish a relationship between RDP and
hypothesis test DP which allows us to translate the RDP constraint into a
tradeoff between type I and type II error probabilities of a certain binary
hypothesis test. We then demonstrate that for noisy SGD our result leads to
tighter privacy guarantees compared to the recently proposed $f$-DP framework
for some range of parameters.
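To make the first conversion concrete, here is a minimal Python sketch of the classical RDP-to-approximate-DP conversion (Mironov, 2017) applied to the Gaussian mechanism. The function names and the alpha grid are illustrative assumptions; the paper's joint-range result yields strictly tighter (epsilon, delta) parameters than this baseline bound.

```python
import math

def gaussian_rdp(alpha: float, sigma: float) -> float:
    """RDP curve of the sensitivity-1 Gaussian mechanism with noise
    multiplier sigma: eps(alpha) = alpha / (2 * sigma**2)."""
    return alpha / (2.0 * sigma ** 2)

def rdp_to_approx_dp(delta: float, sigma: float) -> float:
    """Classical RDP -> (eps, delta)-DP conversion (Mironov, 2017):
    (alpha, eps)-RDP implies (eps + log(1/delta)/(alpha - 1), delta)-DP.
    The paper derives the optimal conversion; this baseline bound is
    used here only for illustration."""
    alphas = [1.0 + k / 10.0 for k in range(1, 1000)]  # grid over alpha > 1
    return min(gaussian_rdp(a, sigma) + math.log(1.0 / delta) / (a - 1.0)
               for a in alphas)

# Example: one Gaussian mechanism query with noise multiplier 4, delta = 1e-5.
print(rdp_to_approx_dp(delta=1e-5, sigma=4.0))
```

Because RDP composes additively over iterations, an accountant of this form is what makes the moments-based analysis of noisy SGD convenient; a tighter conversion at the end directly buys more iterations for the same (epsilon, delta) budget.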
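For the second part, the type I / type II error tradeoff can be illustrated with the trade-off function of mu-Gaussian DP from the $f$-DP framework (Dong et al.). This is a hedged sketch of the kind of curve being compared, not the paper's RDP-derived bound; `gaussian_tradeoff` and the example mu value are assumptions.

```python
from statistics import NormalDist

def gaussian_tradeoff(alpha: float, mu: float) -> float:
    """Trade-off function of mu-Gaussian DP: the minimal type II error
    for testing N(0, 1) vs N(mu, 1) at type I error alpha. The paper
    translates an RDP constraint into a different (sometimes tighter)
    curve of this kind."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1.0 - alpha) - mu)

# A Gaussian mechanism with noise multiplier sigma is (1/sigma)-GDP.
for a in (0.01, 0.05, 0.10):
    print(a, gaussian_tradeoff(a, mu=0.25))
```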
Related papers
- Correlated Privacy Mechanisms for Differentially Private Distributed Mean Estimation [8.660393575612169]
CorDP-DME is a novel DP-DME scheme that spans the gap between local differential privacy (LDP) and distributed DP (SecAgg).
We provide an information-theoretic analysis of CorDP-DME, and derive theoretical guarantees for utility under any given privacy parameters.
arXiv Detail & Related papers (2024-07-03T17:22:33Z)
- Improved Communication-Privacy Trade-offs in $L_2$ Mean Estimation under Streaming Differential Privacy [47.997934291881414]
Existing mean estimation schemes are usually optimized for $L_\infty$ geometry and rely on random rotation or Kashin's representation to adapt to $L_2$ geometry.
We introduce a novel privacy accounting method for the sparsified Gaussian mechanism that incorporates the randomness inherent in sparsification into the DP accounting.
Unlike previous approaches, our accounting algorithm directly operates in $L_2$ geometry, yielding MSEs that converge quickly to those of the Gaussian mechanism.
arXiv Detail & Related papers (2024-05-02T03:48:47Z)
- How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses obtained under the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure differential privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC (a minimal sketch of the underlying clipping step appears after this list).
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- Differentially Private Episodic Reinforcement Learning with Heavy-tailed Rewards [12.809396600279479]
We study the problem of Markov decision processes (MDPs) with heavy-tailed rewards under the constraint of differential privacy (DP).
By resorting to robust mean estimators for rewards, we first propose two frameworks for heavy-tailed MDPs.
We show that there are fundamental differences between the problem of private RL with sub-Gaussian rewards and that with heavy-tailed rewards.
arXiv Detail & Related papers (2023-06-01T20:18:39Z)
- Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions [49.726408540784334]
A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support (a sketch of the standard pessimistic discretization appears after this list).
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
arXiv Detail & Related papers (2022-07-10T04:25:02Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- DP-FP: Differentially Private Forward Propagation for Large Models [2.062295244789704]
We show how to mitigate the performance drop by replacing differentially private stochastic gradient descent (DP-SGD) with a novel DP forward-propagation scheme (DP-FP).
Our DP-FP achieves an average accuracy of 91.34% with privacy budgets less than 3, representing a 3.81% performance improvement over the state-of-the-art DP-SGD.
arXiv Detail & Related papers (2021-12-29T07:32:29Z)
- Differentially Private Coordinate Descent for Composite Empirical Risk Minimization [13.742100810492014]
Machine learning models can leak information about the data used to train them.
Differentially Private (DP) variants of optimization algorithms like Stochastic Gradient Descent (DP-SGD) have been designed to mitigate this.
We propose a new method for composite Differentially Private Empirical Risk Minimization (DP-ERM): Differentially Private Coordinate Descent (DP-CD).
arXiv Detail & Related papers (2021-10-22T10:22:48Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
- A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via $f$-Divergences [14.008231249756678]
Our result is based on the joint range of two $f$-divergences that underlie the approximate and the Rényi variations of differential privacy.
When compared to the state-of-the-art, our bounds may lead to about 100 more gradient descent iterations for training deep learning models for the same privacy budget.
arXiv Detail & Related papers (2020-01-16T18:45:05Z)
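As referenced in the DPSGD-GC entry above, here is a minimal sketch of one DP-SGD step with per-example gradient clipping (Abadi et al., 2016). All names, shapes, and hyperparameters are illustrative assumptions; this is the baseline clipping step, not the error-feedback algorithm from that paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier,
                lr, rng):
    """One DP-SGD step: rescale each per-example gradient to L2 norm
    <= clip_norm, average, then add Gaussian noise whose std is
    noise_multiplier * clip_norm divided by the batch size (since we
    noise the mean rather than the sum)."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(8)]  # stand-in per-example grads
print(dp_sgd_step(params, grads, clip_norm=1.0, noise_multiplier=1.1,
                  lr=0.1, rng=rng))
```

Clipping bounds each example's sensitivity but biases the averaged gradient whenever true gradients exceed the clipping norm, which is exactly the bias the error-feedback approach above targets.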
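And for the PLD entry above, a sketch of the standard pessimistic discretization that the "Connect the Dots" paper tightens. The dictionary representation and the toy numbers are assumptions for illustration only.

```python
import math

def pessimistic_discretize(pld, grid_step):
    """Round each privacy-loss value of a discrete PLD {loss: prob} UP
    to the next multiple of grid_step. Moving probability mass toward
    larger losses can only increase the resulting delta(eps), so the
    coarse PLD remains a valid pessimistic estimate."""
    out = {}
    for loss, prob in pld.items():
        key = math.ceil(loss / grid_step) * grid_step
        out[key] = out.get(key, 0.0) + prob
    return out

toy_pld = {0.12: 0.3, 0.27: 0.5, 0.51: 0.2}  # illustrative losses/probs
print(pessimistic_discretize(toy_pld, grid_step=0.1))
```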
This list is automatically generated from the titles and abstracts of the papers in this site.