Directional Privacy for Deep Learning
- URL: http://arxiv.org/abs/2211.04686v3
- Date: Mon, 27 Nov 2023 03:07:32 GMT
- Title: Directional Privacy for Deep Learning
- Authors: Pedro Faustini, Natasha Fernandes, Shakila Tonni, Annabelle McIver,
Mark Dras
- Abstract summary: Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models.
Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable for preserving utility.
We show that this provides both $\epsilon$-DP and $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism.
- Score: 2.826489388853448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method
for applying privacy in the training of deep learning models. It applies
isotropic Gaussian noise to gradients during training, which can perturb these
gradients in any direction, damaging utility. Metric DP, however, can provide
alternative mechanisms based on arbitrary metrics that might be more suitable
for preserving utility. In this paper, we apply \textit{directional privacy},
via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb
gradients in terms of \textit{angular distance} so that gradient direction is
broadly preserved. We show that this provides both $\epsilon$-DP and $\epsilon
d$-privacy for deep learning training, rather than the $(\epsilon,
\delta)$-privacy of the Gaussian mechanism. Experiments on key datasets then
indicate that the VMF mechanism can outperform the Gaussian in the
utility-privacy trade-off. In particular, our experiments provide a direct
empirical comparison of privacy between the two approaches in terms of their
ability to defend against reconstruction and membership inference.
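As a rough illustration of the mechanism described in the abstract, below is a minimal sketch (not the authors' implementation) of perturbing a clipped gradient's direction with von Mises-Fisher noise. It assumes SciPy >= 1.11 for scipy.stats.vonmises_fisher; the function name, the clipping bound clip_norm, and the concentration parameter kappa are illustrative placeholders, and the paper's privacy calibration of kappa is not reproduced here.

```python
# Minimal sketch (not the authors' code) of VMF-based directional perturbation
# of a clipped gradient. Assumes SciPy >= 1.11 for scipy.stats.vonmises_fisher;
# clip_norm and kappa are illustrative placeholders, not calibrated values.
import numpy as np
from scipy.stats import vonmises_fisher

def vmf_perturb(grad: np.ndarray, kappa: float, clip_norm: float,
                rng: np.random.Generator) -> np.ndarray:
    """Clip the gradient to L2 norm <= clip_norm, then resample its *direction*
    from a von Mises-Fisher distribution centred on the true direction,
    keeping the clipped magnitude (only the angular distance is perturbed)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    mu = clipped / (np.linalg.norm(clipped) + 1e-12)  # unit mean direction
    noisy_dir = np.ravel(vonmises_fisher(mu=mu, kappa=kappa).rvs(1, random_state=rng))
    return noisy_dir * np.linalg.norm(clipped)        # preserve the clipped magnitude

rng = np.random.default_rng(0)
g = rng.normal(size=16)                               # toy per-example gradient
print(vmf_perturb(g, kappa=100.0, clip_norm=1.0, rng=rng))
```

Larger kappa concentrates samples near the original direction (less perturbation); DP-SGD's Gaussian mechanism instead adds isotropic noise that can move the gradient in any direction, which is the utility concern the abstract raises.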
Related papers
- Differentially Private Block-wise Gradient Shuffle for Deep Learning [0.0]
This paper introduces the novel Differentially Private Block-wise Gradient Shuffle (DP-BloGS) algorithm for deep learning.
DP-BloGS builds off of existing private deep learning literature, but makes a definitive shift by taking a probabilistic approach to gradient noise introduction.
It is found to be significantly more resistant to data extraction attempts than DP-SGD.
arXiv Detail & Related papers (2024-07-31T05:32:37Z) - Uncertainty quantification by block bootstrap for differentially private stochastic gradient descent [1.0742675209112622]
Stochastic Gradient Descent (SGD) is a widely used tool in machine learning.
Uncertainty quantification (UQ) for SGD by bootstrap has been addressed by several authors.
We propose a novel block bootstrap for SGD under local differential privacy.
arXiv Detail & Related papers (2024-05-21T07:47:21Z) - Differentially Private Gradient Flow based on the Sliced Wasserstein Distance [59.1056830438845]
We introduce a novel differentially private generative modeling approach based on a gradient flow in the space of probability measures.
Experiments show that our proposed model can generate higher-fidelity data at a low privacy budget.
arXiv Detail & Related papers (2023-12-13T15:47:30Z) - Sparsity-Preserving Differentially Private Training of Large Embedding
Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
arXiv Detail & Related papers (2023-11-14T17:59:51Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Differentially Private Temporal Difference Learning with Stochastic
Nonconvex-Strongly-Concave Optimization [17.361143427007224]
Temporal difference (TD) learning is a widely used method to evaluate policies in reinforcement learning.
In this paper, we consider preserving privacy in TD learning with a nonlinear value function.
We show that DPTD provides an $(\epsilon, \delta)$-differential privacy (DP) guarantee for sensitive information encoded in transitions and retains the original power of TD learning.
arXiv Detail & Related papers (2022-01-25T16:48:29Z) - Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for
Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, \textit{Gradient Embedding Perturbation} (GEP), towards training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z) - Understanding Gradient Clipping in Private SGD: A Geometric Perspective [68.61254575987013]
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
Many learning systems now incorporate differential privacy by training their models with (differentially) private SGD.
A key step in each private SGD update is gradient clipping, which shrinks the gradient of an individual example whenever its L2 norm exceeds some threshold (a minimal sketch of this step appears after this list).
arXiv Detail & Related papers (2020-06-27T19:08:12Z) - A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via
$f$-Divergences [14.008231249756678]
Our result is based on the joint range of two $f$-divergences that underlie the approximate and the Rényi variations of differential privacy.
When compared to the state-of-the-art, our bounds may lead to about 100 more gradient descent iterations for training deep learning models for the same privacy budget.
arXiv Detail & Related papers (2020-01-16T18:45:05Z)
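For contrast with the directional mechanism sketched above, the snippet below is a minimal, assumed illustration (not taken from any of the listed papers) of the per-example clipping and isotropic Gaussian-noise step of standard DP-SGD; the clipping bound C and noise multiplier sigma are illustrative placeholders rather than calibrated values.

```python
# Minimal sketch of a standard DP-SGD update: clip each example's gradient to
# L2 norm <= C, average over the batch, and add isotropic Gaussian noise
# scaled to the clipping bound. C and sigma are illustrative, not calibrated.
import numpy as np

def dp_sgd_noisy_gradient(per_example_grads: np.ndarray, C: float, sigma: float,
                          rng: np.random.Generator) -> np.ndarray:
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / (norms + 1e-12))  # per-example clipping
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, sigma * C / len(per_example_grads), size=mean_grad.shape)
    return mean_grad + noise

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 16))          # 8 examples, 16 parameters
print(dp_sgd_noisy_gradient(grads, C=1.0, sigma=1.0, rng=rng))
```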
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.