Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient
Descent
- URL: http://arxiv.org/abs/2102.05855v1
- Date: Thu, 11 Feb 2021 05:49:37 GMT
- Title: Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient
Descent
- Authors: Rishav Chourasia, Jiayuan Ye, Reza Shokri
- Abstract summary: We model the dynamics of privacy loss in Langevin diffusion and extend it to the noisy gradient descent algorithm.
We prove that the privacy loss converges exponentially fast.
- Score: 10.409652277630132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We model the dynamics of privacy loss in Langevin diffusion and extend it to
the noisy gradient descent algorithm: we compute a tight bound on Rényi
differential privacy and the rate of its change throughout the learning
process. We prove that the privacy loss converges exponentially fast. This
significantly improves the prior privacy analysis of differentially private
(stochastic) gradient descent algorithms, where (Rényi) privacy loss
constantly increases over the training iterations. Unlike composition-based
methods in differential privacy, our privacy analysis does not assume that the
noisy gradients (or parameters) during the training could be revealed to the
adversary. Our analysis tracks the dynamics of privacy loss through the
algorithm's intermediate parameter distributions, thus allowing us to account
for privacy amplification due to convergence. We prove that our privacy
analysis is tight, and also provide a utility analysis for strongly convex,
smooth, and Lipschitz loss functions.
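For intuition, the algorithm whose privacy dynamics are analyzed here is ordinary gradient descent with Gaussian noise injected at every step, i.e. a discretization of Langevin diffusion. The sketch below is a minimal illustration on an assumed ridge-regularized squared loss; the step size, noise scale, and regularization strength are placeholder values, not settings from the paper.

```python
# Minimal sketch of full-batch noisy gradient descent (discretized Langevin
# dynamics). The loss, step size eta, and noise scale sigma are illustrative.
import numpy as np

def noisy_gradient_descent(X, y, eta=0.05, sigma=1.0, lam=0.1, iters=200, seed=0):
    """theta_{t+1} = theta_t - eta * grad L(theta_t) + sqrt(2 * eta) * sigma * N(0, I)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        # Gradient of a smooth, strongly convex, ridge-regularized squared loss.
        grad = X.T @ (X @ theta - y) / n + lam * theta
        theta = theta - eta * grad + np.sqrt(2.0 * eta) * sigma * rng.normal(size=d)
    return theta

# Toy usage: a small synthetic regression problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=100)
print(noisy_gradient_descent(X, y))
```

The paper's claim, restated, is that for losses of this kind the Rényi privacy loss of the final iterate converges exponentially fast in the number of iterations, instead of growing without bound as in composition-based analyses.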
Related papers
- Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv Detail & Related papers (2024-03-01T04:50:04Z) - Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even
for Non-Convex Losses [4.68299658663016]
The Noisy-SGD algorithm is widely used for privately training machine learning models.
Recent findings have shown that if the internal state remains hidden, then the privacy loss might remain bounded.
We address this problem for DP-SGD, a popular variant of Noisy-SGD that incorporates gradient clipping to limit the impact of individual samples on the training process.
arXiv Detail & Related papers (2023-05-17T02:25:56Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection at the cost of training accuracy.
In this work, we aim to minimize both the privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that achieve good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Privacy of Noisy Stochastic Gradient Descent: More Iterations without
More Privacy Loss [34.66940399825547]
Industry has widely adopted a simple algorithm: Gradient Descent with noise (a.k.a. Gradient Langevin Dynamics).
Questions about this algorithm's privacy loss remain open -- even in the seemingly simple setting of smooth convex losses over a bounded domain.
We characterize the differential privacy up to a constant factor and show that after a small burn-in period, running SGD longer leaks no further privacy.
arXiv Detail & Related papers (2022-05-27T02:09:55Z) - Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for
Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), for training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z) - On the Differentially Private Nature of Perturbed Gradient Descent [15.554148012395457]
We consider the problem of empirical risk minimization given a database, using the gradient descent algorithm.
A perturbed gradient descent algorithm is typically employed to escape saddle points.
arXiv Detail & Related papers (2021-01-18T02:29:37Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
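Several of the entries above (notably the DP-SGD papers) rely on per-example gradient clipping followed by Gaussian noise. As a rough illustration only, a single DP-SGD-style update might look like the sketch below; the clip norm, noise multiplier, and learning rate are assumed placeholders, not settings from any of these papers.

```python
# Rough sketch of one DP-SGD-style step: clip each example's gradient to
# norm clip_norm, sum, add Gaussian noise scaled to clip_norm, average, descend.
# clip_norm, noise_multiplier, and lr are illustrative placeholders.
import numpy as np

def dp_sgd_step(theta, per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                lr=0.1, rng=None):
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        # Rescale so every per-example gradient has norm at most clip_norm.
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=theta.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return theta - lr * noisy_mean

# Toy usage: three per-example gradients in R^2.
theta = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2]), np.array([1.0, 0.0])]
print(dp_sgd_step(theta, grads, rng=np.random.default_rng(0)))
```

Clipping bounds each example's contribution to the summed gradient by clip_norm, so Gaussian noise with standard deviation noise_multiplier * clip_norm yields a per-step privacy guarantee; the papers above differ mainly in whether and how those per-step guarantees must be composed over the full training run.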