Gradient Sparsification Can Improve Performance of
Differentially-Private Convex Machine Learning
- URL: http://arxiv.org/abs/2011.14572v2
- Date: Tue, 1 Dec 2020 23:54:09 GMT
- Title: Gradient Sparsification Can Improve Performance of
Differentially-Private Convex Machine Learning
- Authors: Farhad Farokhi
- Abstract summary: We use gradient sparsification to reduce the adverse effect of differential privacy noise on the performance of private machine learning models.
We employ compressed sensing and additive Laplace noise to evaluate differentially-private gradients.
- Score: 14.497406777219112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We use gradient sparsification to reduce the adverse effect of differential
privacy noise on the performance of private machine learning models. To this end,
we employ compressed sensing and additive Laplace noise to evaluate
differentially-private gradients. Noisy privacy-preserving gradients are used
to perform stochastic gradient descent for training machine learning models.
Sparsification, achieved by setting the smallest gradient entries to zero, can
reduce the convergence speed of the training algorithm. However, sparsification
combined with compressed sensing reduces the dimension of the communicated
gradient and the magnitude of the additive noise. The interplay between these
effects determines whether gradient sparsification improves the performance of
differentially-private machine learning models. We investigate this
analytically in the paper. We prove that, for small privacy budgets,
compression can improve the performance of privacy-preserving machine learning
models. However, for large privacy budgets, compression does not necessarily
improve performance. Intuitively, this is because the effect of
privacy-preserving noise is minimal in the large-privacy-budget regime, and thus
the improvements from gradient sparsification cannot compensate for its slower
convergence.
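Below is a minimal sketch, not the authors' exact algorithm, of the kind of privatized update the abstract describes: clip the gradient, keep its top-k entries, add Laplace noise calibrated to the privacy budget, and take an SGD step. The compressed-sensing encoding/reconstruction of the sparse gradient is omitted, and the function names, sensitivity bound, and toy objective are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one differentially-private SGD step with top-k gradient
# sparsification and additive Laplace noise (compressed sensing omitted).

def sparsify_top_k(grad, k):
    """Keep the k largest-magnitude gradient entries; zero out the rest."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = grad[idx]
    return sparse

def laplace_privatize(grad, sensitivity, epsilon, rng):
    """Add Laplace noise with scale = sensitivity / epsilon to each entry."""
    return grad + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=grad.shape)

def private_sgd_step(w, grad, k, clip_norm, epsilon, lr, rng):
    # Clip in L1 norm so the released gradient has bounded sensitivity.
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad, 1) + 1e-12))
    sparse = sparsify_top_k(grad, k)
    # 2 * clip_norm is a simple worst-case L1 sensitivity bound (assumption);
    # the paper's analysis is more refined.
    noisy = laplace_privatize(sparse, 2.0 * clip_norm, epsilon, rng)
    return w - lr * noisy

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is w.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
for _ in range(200):
    w = private_sgd_step(w, grad=w, k=10, clip_norm=1.0, epsilon=0.5, lr=0.1, rng=rng)
print("final loss:", 0.5 * float(w @ w))
```

The trade-off analyzed in the paper is visible here: a smaller k discards more of the gradient (slowing convergence) but, together with compressed sensing, shrinks the vector that must be perturbed, so less total Laplace noise is injected for the same privacy budget.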
Related papers
- Enhancing DP-SGD through Non-monotonous Adaptive Scaling Gradient Weight [15.139854970044075]
We introduce Differentially Private Per-sample Adaptive Scaling Clipping (DP-PSASC).
This approach replaces traditional clipping with non-monotonous adaptive gradient scaling.
Our theoretical and empirical analyses confirm that DP-PSASC preserves gradient privacy and delivers superior performance across diverse datasets.
arXiv Detail & Related papers (2024-11-05T12:47:30Z)
- DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction [57.83978915843095]
This paper introduces DiSK, a novel framework designed to significantly enhance the performance of differentially private optimizers.
To ensure practicality for large-scale training, we simplify the Kalman filtering process, minimizing its memory and computational demands.
arXiv Detail & Related papers (2024-10-04T19:30:39Z)
- Certified Machine Unlearning via Noisy Stochastic Gradient Descent [20.546589699647416]
Machine unlearning aims to efficiently remove the effect of certain data points on the trained model.
We propose to leverage noisy gradient descent for unlearning and establish its first approximate unlearning guarantee.
arXiv Detail & Related papers (2024-03-25T18:43:58Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with stochastic gradient descent (a generic sketch of one DP-SGD step is given after this list).
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- Differentially Private Sharpness-Aware Training [5.488902352630076]
Training deep learning models with differential privacy (DP) results in a degradation of performance.
We show that flat minima can help reduce the negative effects of per-example gradient clipping.
We propose a new sharpness-aware training method that mitigates the privacy-optimization trade-off.
arXiv Detail & Related papers (2023-06-09T03:37:27Z)
- On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning [55.22219308265945]
We study how the Gaussian mechanism for differential privacy and gradient compression jointly impact test accuracy in deep learning.
We observe that, while gradient compression generally has a negative impact on test accuracy in non-private training, it can sometimes improve test accuracy in differentially private training.
arXiv Detail & Related papers (2022-11-01T20:28:45Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Gradient Leakage Attack Resilient Deep Learning [7.893378392969824]
Gradient leakage attacks are considered one of the wickedest privacy threats in deep learning.
Deep learning with differential privacy is a de facto standard for publishing deep learning models with a differential privacy guarantee.
This paper investigates alternative approaches to gradient leakage resilient deep learning with differential privacy.
arXiv Detail & Related papers (2021-12-25T03:33:02Z)
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), towards training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
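Several of the entries above (e.g., the sparsity-preserving embedding training and sharpness-aware training papers) build on standard DP-SGD, which differs from the Laplace mechanism in the main paper by using per-example clipping and Gaussian noise. The following is a generic, hedged sketch of one such step; the model, batch, and parameter values are illustrative assumptions and are not taken from any of the listed papers.

```python
import numpy as np

# Generic DP-SGD step: clip each example's gradient in L2 norm, sum, add
# Gaussian noise scaled by noise_multiplier * clip_norm, then average.

def dp_sgd_step(w, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    noisy_mean = (total + noise) / len(per_example_grads)
    return w - lr * noisy_mean

# Toy usage: least-squares gradients for a small random batch.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(32, 10)), rng.normal(size=32)
w = np.zeros(10)
for _ in range(100):
    grads = [(x @ w - t) * x for x, t in zip(X, y)]  # per-example gradients
    w = dp_sgd_step(w, grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.05, rng=rng)
```

Applied naively to embedding layers, the noise added in this step is dense, which is the sparsity-destruction issue that DP-FEST and DP-AdaFEST (above) aim to avoid.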