Differentially Private Variational Autoencoders with Term-wise Gradient
Aggregation
- URL: http://arxiv.org/abs/2006.11204v1
- Date: Fri, 19 Jun 2020 16:12:28 GMT
- Title: Differentially Private Variational Autoencoders with Term-wise Gradient
Aggregation
- Authors: Tsubasa Takahashi, Shun Takagi, Hajime Ono, Tatsuya Komatsu
- Abstract summary: We study how to learn variational autoencoders with a variety of divergences under differential privacy constraints.
We propose term-wise DP-SGD that crafts randomized gradients in two different ways tailored to the compositions of the loss terms.
- Score: 12.880889651679094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies how to learn variational autoencoders with a
variety of divergences under differential privacy constraints. We often build
a VAE with an appropriate prior distribution to describe the desired
properties of the learned representations, and introduce a divergence as a
regularization term to keep those representations close to the prior. Using
differentially private SGD (DP-SGD), which randomizes a stochastic gradient
by injecting noise calibrated to the gradient's sensitivity, we can easily
build a differentially private model. However, we reveal that attaching such
divergences increases the sensitivity from O(1) to O(B) in the batch size B,
which forces the injection of so much noise that learning becomes difficult.
To solve this issue, we propose term-wise DP-SGD, which crafts randomized
gradients in two different ways tailored to the compositions of the loss
terms. Term-wise DP-SGD keeps the sensitivity at O(1) even when the
divergence is attached, and therefore reduces the amount of injected noise.
Our experiments demonstrate that the method works well with two pairs of
prior distribution and divergence.
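To make the sensitivity argument concrete, below is a minimal sketch of the two gradient-randomization modes the abstract describes. It is illustrative only: the function names, the clipping bounds C_rec and C_div, and the assumption of a single batch-level divergence term are ours, not the paper's published method or code.

```python
import numpy as np

def clip_to_norm(g, bound):
    """Scale g so that its L2 norm is at most `bound` (DP-SGD clipping)."""
    norm = np.linalg.norm(g)
    return g if norm <= bound else g * (bound / norm)

def termwise_dp_sgd_step(params, rec_grads, div_grad,
                         C_rec=1.0, C_div=1.0, sigma=1.0, lr=0.1):
    """One hypothetical term-wise DP-SGD update (a sketch, assuming the loss
    splits into a per-example reconstruction term plus one batch-level
    divergence term).

    rec_grads: (B, d) array, one reconstruction-term gradient per example.
    div_grad:  (d,) gradient of the batch-level divergence term.
    """
    B, d = rec_grads.shape
    # Per-example term: clip each example's gradient so that changing one
    # example moves the sum by at most C_rec, i.e. sensitivity O(1) in B.
    rec_sum = np.zeros(d)
    for g in rec_grads:
        rec_sum += clip_to_norm(g, C_rec)
    rec_noisy = rec_sum + np.random.normal(0.0, sigma * C_rec, size=d)
    # Batch-level term: clip and noise this term's gradient once as a whole;
    # forcing it through per-example clipping instead would let a single
    # example influence all B clipped gradients, inflating the sensitivity
    # to O(B) as the abstract warns.
    div_noisy = clip_to_norm(div_grad, C_div) \
        + np.random.normal(0.0, sigma * C_div, size=d)
    return params - lr * (rec_noisy / B + div_noisy)
```

The point the sketch tries to convey is that each loss term gets its own clipping bound and noise, matched to whether it decomposes per example or over the whole batch; the exact mechanism and its privacy accounting are in the paper.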
Related papers
- Differentially Private Gradient Flow based on the Sliced Wasserstein Distance [59.1056830438845]
We introduce a novel differentially private generative modeling approach based on a gradient flow in the space of probability measures.
Experiments show that our proposed model can generate higher-fidelity data at a low privacy budget.
arXiv Detail & Related papers (2023-12-13T15:47:30Z) - Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
arXiv Detail & Related papers (2023-11-24T17:56:44Z) - DP-SGD for non-decomposable objective functions [0.0]
We develop a new DP-SGD variant for similarity-based loss functions that manipulates gradients of the objective function in a novel way to obtain a sensitivity of the summed gradient that is $O(1)$ for batch size $n$.
Our method's performance comes close to that of a non-private model and generally outperforms DP-SGD applied directly to the contrastive loss.
arXiv Detail & Related papers (2023-10-04T18:48:16Z) - DPVIm: Differentially Private Variational Inference Improved [13.761202518891329]
Differentially private (DP) release of multidimensional statistics typically considers an aggregate sensitivity.
Different dimensions of that vector might have widely different magnitudes and therefore DP perturbation disproportionately affects the signal across dimensions.
We observe this problem in the gradient release of the DP-SGD algorithm when using it for variational inference (VI).
arXiv Detail & Related papers (2022-10-28T07:41:32Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Improving Differentially Private SGD via Randomly Sparsified Gradients [31.295035726077366]
Differentially private stochastic gradient descent (DP-SGD) has been widely adopted in deep learning to provide a rigorously defined privacy guarantee.
We propose to utilize randomly sparsified gradients (RS) to reduce the communication cost and strengthen the privacy bound.
arXiv Detail & Related papers (2021-12-01T21:43:34Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - On the Practicality of Differential Privacy in Federated Learning by
Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z) - Understanding Gradient Clipping in Private SGD: A Geometric Perspective [68.61254575987013]
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
Many learning systems now incorporate differential privacy by training their models with (differentially) private SGD.
A key step in each private SGD update is gradient clipping, which shrinks the gradient of an individual example whenever its L2 norm exceeds some threshold (a minimal sketch of this operation follows this list).
arXiv Detail & Related papers (2020-06-27T19:08:12Z)
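As a side note on the clipping step described in the "Understanding Gradient Clipping in Private SGD" entry above, the shrink operation, and the per-example normalization that DP-NSGD uses instead per the "Normalized/Clipped SGD" entry, are simple enough to state directly. This is a generic sketch, not code from either paper, and the constant r below is our own addition to avoid division by zero:

```python
import numpy as np

def clip(g, threshold):
    # Shrink g only when its L2 norm exceeds the threshold; the direction
    # is preserved and small gradients pass through unchanged.
    norm = np.linalg.norm(g)
    return g if norm <= threshold else g * (threshold / norm)

def normalize(g, r=1e-6):
    # Per-example normalization in the DP-NSGD style: every gradient is
    # rescaled to roughly unit norm, not just the large ones.
    return g / (np.linalg.norm(g) + r)
```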
This list is automatically generated from the titles and abstracts of the papers on this site.