On the Interaction Between Differential Privacy and Gradient Compression
in Deep Learning
- URL: http://arxiv.org/abs/2211.00734v1
- Date: Tue, 1 Nov 2022 20:28:45 GMT
- Title: On the Interaction Between Differential Privacy and Gradient Compression
in Deep Learning
- Authors: Jimmy Lin
- Abstract summary: We study how the Gaussian mechanism for differential privacy and gradient compression jointly impact test accuracy in deep learning.
We observe that while gradient compression generally has a negative impact on test accuracy in non-private training, it can sometimes improve test accuracy in differentially private training.
- Score: 55.22219308265945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While differential privacy and gradient compression are separately
well-researched topics in machine learning, the study of interaction between
these two topics is still relatively new. We perform a detailed empirical study
on how the Gaussian mechanism for differential privacy and gradient compression
jointly impact test accuracy in deep learning. The existing literature in
gradient compression mostly evaluates compression in the absence of
differential privacy guarantees, and demonstrates that sufficiently high
compression rates reduce accuracy. Similarly, existing literature in
differential privacy evaluates privacy mechanisms in the absence of
compression, and demonstrates that sufficiently strong privacy guarantees
reduce accuracy. In this work, we observe that while gradient compression generally
has a negative impact on test accuracy in non-private training, it can
sometimes improve test accuracy in differentially private training.
Specifically, we observe that when applying aggressive sparsification or rank
reduction to the gradients, test accuracy is less affected by the Gaussian
noise added for differential privacy. These observations are explained through
an analysis of how differential privacy and compression affect the bias and
variance in estimating the average gradient. We follow this study with a
recommendation on how to improve test accuracy in the context of
differentially private deep learning and gradient compression. We evaluate this
proposal and find that it can reduce the negative impact of noise added by
differential privacy mechanisms on test accuracy by up to 24.6%, and reduce the
negative impact of gradient sparsification on test accuracy by up to 15.1%.
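The mechanism pairing studied here can be illustrated with a short sketch that combines per-example clipping, the Gaussian mechanism, and top-k gradient sparsification. This is a minimal illustration of the general recipe, not the authors' exact training pipeline; the clipping bound, noise multiplier, and sparsity fraction below are placeholder values, and the ordering of noise addition and compression is only one of several possible design choices.

```python
import numpy as np


def dp_compressed_gradient(per_example_grads, clip_norm=1.0,
                           noise_multiplier=1.0, k_fraction=0.01, seed=0):
    """Clip per-example gradients, add Gaussian noise (the Gaussian mechanism),
    then keep only the top-k entries of the noisy average (sparsification).

    Illustrative sketch only: the hyperparameters are placeholders, not
    values taken from the paper.
    """
    rng = np.random.default_rng(seed)

    # 1. Per-example L2 clipping bounds each example's contribution,
    #    which fixes the sensitivity used by the Gaussian mechanism.
    clipped = np.stack([
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ])

    # 2. Gaussian mechanism: noise standard deviation scales with the
    #    clipping bound and the chosen noise multiplier.
    n = clipped.shape[0]
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=clipped.shape[1])
    noisy_avg = (clipped.sum(axis=0) + noise) / n

    # 3. Top-k sparsification: keep only the largest-magnitude coordinates
    #    of the noisy average gradient.
    k = max(1, int(k_fraction * noisy_avg.size))
    top_idx = np.argpartition(np.abs(noisy_avg), -k)[-k:]
    sparse = np.zeros_like(noisy_avg)
    sparse[top_idx] = noisy_avg[top_idx]
    return sparse


# Usage: 32 per-example gradients of a 10,000-parameter model.
grads = [np.random.randn(10_000) for _ in range(32)]
update = dp_compressed_gradient(grads)
```

Intuitively, step 3 discards coordinates whose noisy estimates are dominated by the added Gaussian noise, which is consistent with the abstract's observation that under aggressive sparsification test accuracy is less affected by the privacy noise.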
Related papers
- Privacy at a Price: Exploring its Dual Impact on AI Fairness [24.650648702853903]
We show that differential privacy in machine learning models can unequally impact separate demographic subgroups regarding prediction accuracy.
This leads to a fairness concern, and manifests as biased performance.
Implementing gradient clipping in the differentially private gradient descent method can mitigate the negative impact of DP noise on fairness.
arXiv Detail & Related papers (2024-04-15T00:23:41Z) - Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can improve the variance loss while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z) - SA-DPSGD: Differentially Private Stochastic Gradient Descent based on
Simulated Annealing [25.25065807901922]
Differentially private gradient descent is the most popular training method with differential privacy in image recognition.
Existing DPSGD schemes lead to significant performance degradation, which limits the practical adoption of differential privacy.
We propose a simulated annealing-based differentially private stochastic gradient descent scheme (SA-DPSGD) which accepts a candidate update with a probability that depends on the update quality and on the number of iterations (a generic acceptance rule of this form is sketched after this list).
arXiv Detail & Related papers (2022-11-14T09:20:48Z) - Gradient Leakage Attack Resilient Deep Learning [7.893378392969824]
Gradient leakage attacks are considered one of the wickedest privacy threats in deep learning.
Deep learning with differential privacy is a de facto standard for publishing deep learning models with a differential privacy guarantee.
This paper investigates alternative approaches to gradient leakage resilient deep learning with differential privacy.
arXiv Detail & Related papers (2021-12-25T03:33:02Z) - Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z) - Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for
Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), for training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z) - On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Gradient Sparsification Can Improve Performance of
Differentially-Private Convex Machine Learning [14.497406777219112]
We use gradient sparsification to reduce the adverse effect of differential privacy noise on the performance of private machine learning models.
We employ compressed sensing and additive Laplace noise to evaluate differentially-private gradients.
arXiv Detail & Related papers (2020-11-30T06:37:06Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
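As referenced in the SA-DPSGD entry above, that scheme accepts a candidate update with a probability depending on update quality and iteration count, in the style of simulated annealing. Below is a minimal, hypothetical sketch of such an acceptance rule, assuming update quality is measured by the change in training loss and the temperature decays geometrically; it is not the authors' exact algorithm.

```python
import math
import random


def accept_update(delta_loss, iteration, t0=1.0, decay=0.999, rng=None):
    """Simulated-annealing-style acceptance test for a candidate update.

    Hypothetical sketch: a candidate that does not increase the loss is
    always accepted; a worsening candidate is accepted with probability
    exp(-delta_loss / temperature), where the temperature decays
    geometrically with the iteration count.
    """
    rng = rng or random.Random(0)
    if delta_loss <= 0:
        return True
    temperature = t0 * (decay ** iteration)
    return rng.random() < math.exp(-delta_loss / temperature)


# Usage: a worsening update becomes harder to accept as training proceeds.
print(accept_update(delta_loss=0.05, iteration=10))
print(accept_update(delta_loss=0.05, iteration=5000))
```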