A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via
$f$-Divergences
- URL: http://arxiv.org/abs/2001.05990v1
- Date: Thu, 16 Jan 2020 18:45:05 GMT
- Title: A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via
$f$-Divergences
- Authors: Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha
Sankar
- Abstract summary: Our result is based on the joint range of two $f$-divergences that underlie the approximate and the R\'enyi variations of differential privacy.
When compared to the state-of-the-art, our bounds may lead to about 100 more gradient descent iterations for training deep learning models for the same privacy budget.
- Score: 14.008231249756678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We derive the optimal differential privacy (DP) parameters of a mechanism
that satisfies a given level of R\'enyi differential privacy (RDP). Our result
is based on the joint range of two $f$-divergences that underlie the
approximate and the R\'enyi variations of differential privacy. We apply our
result to the moments accountant framework for characterizing privacy
guarantees of stochastic gradient descent. When compared to the
state-of-the-art, our bounds may lead to about 100 more stochastic gradient
descent iterations for training deep learning models for the same privacy
budget.
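For context, the sketch below (not from the paper) implements the classical RDP-to-DP conversion of Mironov (2017), $\epsilon = \epsilon_{\mathrm{RDP}} + \log(1/\delta)/(\alpha-1)$, optimized over a grid of orders; the paper's joint-range result yields a provably tighter conversion. The Gaussian-mechanism RDP curve $\epsilon(\alpha) = \alpha/(2\sigma^2)$ and all constants are illustrative assumptions.

```python
import math

def classical_rdp_to_dp(alpha: float, eps_rdp: float, delta: float) -> float:
    """Standard RDP-to-DP conversion (Mironov, 2017): an (alpha, eps_rdp)-RDP
    mechanism satisfies (eps_rdp + log(1/delta)/(alpha - 1), delta)-DP."""
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1.0)

def best_eps(rdp_curve, delta: float) -> float:
    """Convert the RDP guarantee at each available order and keep the
    smallest resulting epsilon."""
    return min(classical_rdp_to_dp(a, e, delta) for a, e in rdp_curve)

# Toy RDP curve of a Gaussian mechanism with noise multiplier sigma:
# eps_rdp(alpha) = alpha / (2 * sigma**2) for unit L2 sensitivity.
sigma = 1.3
orders = [1.5, 2, 4, 8, 16, 32, 64]
rdp_curve = [(a, a / (2 * sigma**2)) for a in orders]
print(best_eps(rdp_curve, delta=1e-5))
```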
Related papers
- Comparing privacy notions for protection against reconstruction attacks in machine learning [10.466570297146953]
In the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL).
In response to these threats, the privacy community recommends the use of differential privacy (DP) in the gradient descent algorithm, termed DP-SGD.
In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees.
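As a reference point for the DP-SGD algorithm discussed here and in several entries below, this is a minimal sketch of one DP-SGD step (per-example clipping plus Gaussian noise, following Abadi et al., 2016); it is not code from any of these papers, and all names and constants are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray, clip_norm: float,
                noise_multiplier: float, rng: np.random.Generator) -> np.ndarray:
    """One DP-SGD update direction: clip each per-example gradient to L2
    norm `clip_norm`, sum, add Gaussian noise calibrated to the clipping
    bound, and average."""
    n = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / n

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))          # 32 examples, 10 parameters
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```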
arXiv Detail & Related papers (2025-02-06T13:04:25Z)
- Differentially Private Policy Gradient [48.748194765816955]
We show that it is possible to find the right trade-off between privacy noise and trust-region size to obtain a performant differentially private policy gradient algorithm.
Our results and the complexity of the tasks addressed represent a significant improvement over existing DP algorithms in online RL.
arXiv Detail & Related papers (2025-01-31T12:11:13Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
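A toy illustration of the sparsity problem the summary describes (not the DP-FEST/DP-AdaFEST algorithms themselves): adding dense Gaussian noise to a sparse embedding gradient makes every row nonzero, so downstream sparse-update machinery no longer helps.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 100_000, 16

# Sparse embedding gradient: only the rows seen in the batch are nonzero.
grad = np.zeros((vocab, dim))
grad[rng.choice(vocab, size=64, replace=False)] = rng.normal(size=(64, dim))
print((np.abs(grad) > 0).any(axis=1).sum())    # 64 nonzero rows

# Naive DP-SGD adds dense Gaussian noise to every coordinate,
# so every row of the noised gradient becomes nonzero.
noised = grad + rng.normal(0.0, 1.0, size=grad.shape)
print((np.abs(noised) > 0).any(axis=1).sum())  # 100_000 nonzero rows
```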
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing [25.25065807901922]
Differentially private stochastic gradient descent (DPSGD) is the most popular method for training image recognition models with differential privacy.
Existing DPSGD schemes lead to significant performance degradation, which prevents the application of differential privacy.
We propose a simulated annealing-based differentially private stochastic gradient descent scheme (SA-DPSGD) that accepts a candidate update with a probability that depends on the update quality and on the number of iterations.
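The acceptance rule below is a generic simulated-annealing sketch consistent with this summary, not necessarily SA-DPSGD's exact criterion; the temperature schedule and constants are assumptions.

```python
import math
import random

def accept(candidate_loss: float, current_loss: float, step: int,
           t0: float = 1.0, decay: float = 0.999) -> bool:
    """Generic simulated-annealing acceptance: always accept improvements;
    accept worse updates with a probability that shrinks both with the
    loss increase and with the iteration count (via a cooling schedule)."""
    temperature = t0 * (decay ** step)
    if candidate_loss <= current_loss:
        return True
    return random.random() < math.exp(-(candidate_loss - current_loss) / temperature)
```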
arXiv Detail & Related papers (2022-11-14T09:20:48Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
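One way to read this claim, assuming the Gaussian mechanism: with a fixed noise scale, an example's per-step RDP grows with the square of its (clipped) gradient norm, so examples with small gradients retain stronger guarantees. A sketch of that relationship, not the paper's accountant:

```python
def individual_rdp(alpha: float, grad_norm: float, noise_std: float) -> float:
    """Gaussian-mechanism RDP with example-specific sensitivity: an example
    whose clipped gradient norm is c faces eps(alpha) = alpha * c^2 / (2 sigma^2),
    so small-gradient examples accumulate privacy loss more slowly."""
    return alpha * grad_norm**2 / (2.0 * noise_std**2)

# An example clipped at the full bound vs. one with a small gradient:
print(individual_rdp(alpha=8, grad_norm=1.0, noise_std=1.1))  # worst case
print(individual_rdp(alpha=8, grad_norm=0.2, noise_std=1.1))  # much smaller
```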
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), for training differentially private deep models with decent accuracy.
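A simplified sketch of the gradient-embedding idea (project, perturb in low dimension, map back); the actual GEP algorithm also perturbs the residual component, which is omitted here, and the basis construction below is an assumption for illustration.

```python
import numpy as np

def gep_like_update(grad: np.ndarray, basis: np.ndarray,
                    noise_std: float, rng: np.random.Generator) -> np.ndarray:
    """Project the gradient onto a low-dimensional basis (orthonormal rows),
    add Gaussian noise in that k-dimensional space, and map back to the
    full parameter space."""
    embedding = basis @ grad
    noisy = embedding + rng.normal(0.0, noise_std, size=embedding.shape)
    return basis.T @ noisy

rng = np.random.default_rng(0)
d, k = 1000, 20
basis = np.linalg.qr(rng.normal(size=(d, k)))[0].T  # k orthonormal rows
grad = rng.normal(size=d)
update = gep_like_update(grad, basis, noise_std=0.1, rng=rng)
```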
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Three Variants of Differential Privacy: Lossless Conversion and Applications [13.057076084452016]
We consider three different variants of differential privacy (DP), namely approximate DP, R\'enyi DP (RDP), and hypothesis-test DP.
In the first part, we develop a machinery for relating approximate DP to RDP based on the joint range of two $f$-divergences.
As an application, we use our result in the moments accountant framework for characterizing privacy guarantees of noisy stochastic gradient descent.
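For reference, the two $f$-divergences in question are, by standard definitions (not quoted from the paper), the hockey-stick divergence $E_\gamma$ behind approximate DP and the R\'enyi divergence $D_\alpha$ behind RDP:

```latex
% Hockey-stick divergence: a mechanism is (eps, delta)-DP iff
% E_{e^eps}(P || Q) <= delta for all neighboring input distributions P, Q.
E_\gamma(P \| Q) = \int \left( \mathrm{d}P - \gamma\, \mathrm{d}Q \right)_+,
\qquad \gamma \ge 1.
% Renyi divergence of order alpha: (alpha, eps)-RDP means D_alpha <= eps.
D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log
\mathbb{E}_{Q}\!\left[ \left( \frac{\mathrm{d}P}{\mathrm{d}Q} \right)^{\alpha} \right].
```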
arXiv Detail & Related papers (2020-08-14T18:23:50Z)