Spectral-DP: Differentially Private Deep Learning through Spectral Perturbation and Filtering
- URL: http://arxiv.org/abs/2307.13231v1
- Date: Tue, 25 Jul 2023 03:45:56 GMT
- Title: Spectral-DP: Differentially Private Deep Learning through Spectral Perturbation and Filtering
- Authors: Ce Feng, Nuo Xu, Wujie Wen, Parv Venkitasubramaniam, Caiwen Ding
- Abstract summary: We present Spectral-DP, a new differentially private learning approach which combines gradient perturbation in the spectral domain with spectral filtering to achieve a desired privacy guarantee.
We develop differentially private deep learning methods based on Spectral-DP for architectures that contain both convolutional and fully connected layers.
In comparison with state-of-the-art DP-SGD based approaches, Spectral-DP is shown to have uniformly better utility performance in both training from scratch and transfer learning settings.
- Score: 13.924503289749035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy is a widely accepted measure of privacy in the context
of deep learning algorithms, and achieving it relies on a noisy training
approach known as differentially private stochastic gradient descent (DP-SGD).
DP-SGD requires direct noise addition to every gradient in a dense neural
network, so privacy is achieved at a significant utility cost. In this work,
we present Spectral-DP, a new differentially private learning approach which
combines gradient perturbation in the spectral domain with spectral filtering
to achieve a desired privacy guarantee with a lower noise scale and thus better
utility. We develop differentially private deep learning methods based on
Spectral-DP for architectures that contain both convolutional and fully connected
layers. In particular, for fully connected layers, we combine a block-circulant
based spatial restructuring with Spectral-DP to achieve better utility. Through
comprehensive experiments, we study and provide guidelines to implement
Spectral-DP deep learning on benchmark datasets. In comparison with
state-of-the-art DP-SGD based approaches, Spectral-DP is shown to have
uniformly better utility performance in both training from scratch and transfer
learning settings.
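As a rough illustration of the recipe the abstract describes, here is a minimal numpy sketch for a single flattened, per-sample-clipped gradient. The function name, the noise scale `sigma`, the `keep_ratio`, and the low-pass choice of which coefficients to keep are illustrative assumptions, not the paper's calibrated settings.
```python
import numpy as np

def spectral_dp_perturb(grad, clip_norm=1.0, sigma=1.0, keep_ratio=0.5, rng=None):
    """Illustrative Spectral-DP-style step: clip, perturb in the
    spectral (FFT) domain, filter out a fraction of the noisy
    coefficients, and transform back. Hyperparameters are
    placeholders, not the paper's calibrated values."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-sample clipping, as in DP-SGD.
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    # Move to the spectral domain.
    spec = np.fft.fft(grad)
    # Gaussian perturbation of the spectral coefficients.
    noise = rng.normal(0.0, sigma * clip_norm, spec.shape) \
          + 1j * rng.normal(0.0, sigma * clip_norm, spec.shape)
    spec = spec + noise
    # Spectral filtering: keep only the first k coefficients, which
    # discards part of the injected noise along with the rest.
    k = int(keep_ratio * spec.size)
    spec[k:] = 0.0
    # Back to the spatial domain; the result is real up to numerics.
    return np.fft.ifft(spec).real
```
The intuition is that filtering removes a fraction of the added noise energy, so the same privacy budget can be met with a lower effective noise scale than direct gradient perturbation.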
Related papers
- DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction [57.83978915843095]
This paper introduces DiSK, a novel framework designed to significantly enhance the performance of differentially private optimizers by denoising privatized gradients.
To ensure practicality for large-scale training, we simplify the Kalman filtering process, minimizing its memory and computational demands.
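The simplified Kalman-filtering idea can be sketched as a diagonal filter over gradient coordinates; the class name and constants below are illustrative, not the paper's.
```python
import numpy as np

class SimplifiedKalmanDenoiser:
    """Toy diagonal Kalman filter over gradient coordinates, in the
    spirit of DiSK's simplified filtering (names and constants are
    illustrative assumptions)."""
    def __init__(self, dim, process_var=1e-2, obs_var=1.0):
        self.x = np.zeros(dim)   # running estimate of the true gradient
        self.p = np.ones(dim)    # per-coordinate estimate variance
        self.q = process_var     # how fast the true gradient drifts
        self.r = obs_var         # variance of the injected DP noise

    def update(self, noisy_grad):
        self.p = self.p + self.q                     # predict step
        k = self.p / (self.p + self.r)               # Kalman gain
        self.x = self.x + k * (noisy_grad - self.x)  # correct step
        self.p = (1.0 - k) * self.p
        return self.x
```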
arXiv Detail & Related papers (2024-10-04T19:30:39Z)
- Differentially Private Block-wise Gradient Shuffle for Deep Learning [0.0]
This paper introduces the novel Differentially Private Block-wise Gradient Shuffle (DP-BloGS) algorithm for deep learning.
DP-BloGS builds on the existing private deep learning literature, but makes a definitive shift by taking a probabilistic approach to introducing gradient noise.
It is found to be significantly more resistant to data extraction attempts than DP-SGD.
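A minimal sketch of block-wise shuffling on a flattened gradient; the block size and zero-padding are illustrative choices, not the paper's.
```python
import numpy as np

def blockwise_shuffle(grad, block_size=64, rng=None):
    """Illustrative block-wise gradient shuffle: split a flattened
    gradient into fixed-size blocks and randomly permute the blocks,
    so the randomness comes from the permutation rather than from
    additive noise."""
    rng = np.random.default_rng() if rng is None else rng
    flat = grad.ravel()
    pad = (-flat.size) % block_size           # pad so blocks divide evenly
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)
    rng.shuffle(blocks)                       # permute blocks in place
    return blocks.ravel()[:flat.size].reshape(grad.shape)
```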
arXiv Detail & Related papers (2024-07-31T05:32:37Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure differential privacy (DP) comes at the cost of degraded model performance.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
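A generic error-feedback DP step looks roughly as follows; this is a standard EF sketch under the usual Gaussian-mechanism assumptions, not the paper's exact algorithm.
```python
import numpy as np

def ef_dp_step(grad, error, clip_norm=1.0, sigma=1.0, rng=None):
    """Illustrative error-feedback DP update: clip the gradient plus
    the accumulated clipping residual, privatize the clipped part
    with Gaussian noise, and carry the removed remainder forward so
    the clipping bias does not accumulate."""
    rng = np.random.default_rng() if rng is None else rng
    corrected = grad + error                  # re-inject the past residual
    norm = np.linalg.norm(corrected)
    clipped = corrected * min(1.0, clip_norm / (norm + 1e-12))
    new_error = corrected - clipped           # what clipping removed
    noisy = clipped + rng.normal(0.0, sigma * clip_norm, grad.shape)
    return noisy, new_error
```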
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- Directional Privacy for Deep Learning [2.826489388853448]
Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models.
Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable for preserving utility.
We show that this provides both $\epsilon$-DP and $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism.
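The directional idea can be illustrated by perturbing a gradient's direction with a von Mises-Fisher sample instead of adding isotropic noise. The sampler below is the standard Wood rejection method; how the concentration `kappa` maps to a concrete privacy budget is deliberately left out, since that depends on the paper's calibration.
```python
import numpy as np

def sample_vmf(mu, kappa, rng=None):
    """Sample from a von Mises-Fisher distribution on the unit sphere
    (Wood's rejection method). Perturbing a unit gradient direction mu
    this way is the core idea behind directional/metric-DP mechanisms;
    kappa is an illustrative stand-in for the privacy parameter."""
    rng = np.random.default_rng() if rng is None else rng
    d = mu.size                               # mu must be a unit vector
    b = (-2 * kappa + np.sqrt(4 * kappa**2 + (d - 1) ** 2)) / (d - 1)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + (d - 1) * np.log(1 - x0**2)
    while True:                               # rejection-sample the cosine
        z = rng.beta((d - 1) / 2, (d - 1) / 2)
        w = (1 - (1 + b) * z) / (1 - (1 - b) * z)
        if kappa * w + (d - 1) * np.log(1 - x0 * w) - c >= np.log(rng.uniform()):
            break
    # Random direction orthogonal to mu, then combine on the sphere.
    v = rng.normal(size=d)
    v -= v.dot(mu) * mu
    v /= np.linalg.norm(v)
    return w * mu + np.sqrt(1 - w**2) * v
```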
arXiv Detail & Related papers (2022-11-09T05:18:08Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
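For intuition, here is a toy computation of the entropy-regularized transport cost between two point clouds via Sinkhorn iterations. In a DP-Sinkhorn-style pipeline this would score a generated batch against a real batch; the privacy mechanism on the gradients flowing from the real data is not shown.
```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.1, n_iter=100):
    """Entropy-regularized optimal-transport cost between point clouds
    x (n, dim) and y (m, dim), computed with plain Sinkhorn scaling
    iterations (a numerically naive toy version)."""
    c = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    k = np.exp(-c / eps)                                # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))                   # uniform marginals
    b = np.full(len(y), 1.0 / len(y))
    v = np.ones(len(y))
    for _ in range(n_iter):                             # alternating scalings
        u = a / (k @ v)
        v = b / (k.T @ u)
    return u @ ((k * c) @ v)    # <P, C> with plan P = diag(u) K diag(v)
```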
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into federated Thompson sampling (FTS) through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
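The "DP for iterative algorithms" pattern can be sketched as clipping and noising whatever vector an agent communicates each iteration. The function below is a generic sketch of that pattern only; subsampling and the distributed-exploration component are omitted.
```python
import numpy as np

def privatize_iterate(vec, clip_norm=1.0, sigma=1.0, rng=None):
    """Generic per-iteration privatization: norm-clip the message an
    agent sends, then add Gaussian noise calibrated to the clip norm.
    Constants are illustrative, not the paper's calibration."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = vec * min(1.0, clip_norm / (np.linalg.norm(vec) + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, vec.shape)
```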
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
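A rough sketch of privatizing per-example activations rather than gradients, with illustrative constants; this shows the general pattern, not the paper's exact mechanism.
```python
import numpy as np

def privatize_activations(acts, clip_norm=1.0, sigma=1.0, rng=None):
    """Sketch of a NeuralDP-style layer: each example's activation
    vector is norm-clipped and Gaussian-noised inside the network,
    instead of noising gradients as DP-SGD does."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(acts, axis=1, keepdims=True)   # per example
    clipped = acts * np.minimum(1.0, clip_norm / (norms + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, acts.shape)
```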
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
- Practical and Private (Deep) Learning without Sampling or Shuffling [24.29228578925639]
We consider training models with differential privacy using mini-batch gradients.
We analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably to amplified DP-SGD.
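The key ingredient of DP-FTRL is binary-tree aggregation, which reuses noise across prefix sums of gradients instead of drawing fresh noise per step, removing the need for amplification by sampling or shuffling. A toy sketch, not the paper's exact estimator:
```python
import numpy as np

def tree_noise_prefix(grads, sigma, rng=None):
    """Release noisy prefix sums of grads (shape (T, dim)) using the
    binary-tree mechanism: each prefix [1, t] decomposes into O(log t)
    dyadic blocks, and each block's noise is sampled once and reused."""
    rng = np.random.default_rng() if rng is None else rng
    t_steps, dim = grads.shape
    node_noise = {}                  # lazily sampled noise per tree node
    out, prefix = [], np.zeros(dim)
    for t in range(1, t_steps + 1):
        prefix += grads[t - 1]
        noisy, i = prefix.copy(), t
        while i > 0:                 # dyadic decomposition of [1, t]
            low = i & -i             # size of the block ending at i
            key = (i - low + 1, i)
            if key not in node_noise:
                node_noise[key] = rng.normal(0.0, sigma, dim)
            noisy += node_noise[key]
            i -= low
        out.append(noisy)
    return np.array(out)
```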
arXiv Detail & Related papers (2021-02-26T20:16:26Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
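The Laplacian smoothing in the title can be sketched as solving (I + lambda*L) g_s = g, with L the 1-D discrete Laplacian, which is cheap in the Fourier domain because L is circulant. The code below is an illustrative 1-D version for a flattened noisy gradient, not the paper's full algorithm.
```python
import numpy as np

def laplacian_smooth(grad, lam=1.0):
    """Denoise a flattened gradient by solving (I + lam*L) g_s = g in
    the Fourier domain, where L is the circulant 1-D discrete
    Laplacian (assumes grad.size >= 3)."""
    d = grad.size
    v = np.zeros(d)
    v[0], v[1], v[-1] = 2.0, -1.0, -1.0   # first column of circulant L
    denom = 1.0 + lam * np.fft.fft(v)     # eigenvalues of I + lam*L
    return np.real(np.fft.ifft(np.fft.fft(grad) / denom))
```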
arXiv Detail & Related papers (2020-05-01T04:28:38Z)