Differentially Private Deep Learning with Direct Feedback Alignment
- URL: http://arxiv.org/abs/2010.03701v1
- Date: Thu, 8 Oct 2020 00:25:22 GMT
- Title: Differentially Private Deep Learning with Direct Feedback Alignment
- Authors: Jaewoo Lee and Daniel Kifer
- Abstract summary: We propose the first differentially private method for training deep neural networks with direct feedback alignment (DFA).
It achieves significant gains in accuracy (often by 10-20%) compared to backprop-based differentially private training on a variety of architectures.
- Score: 15.410557873153833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard methods for differentially private training of deep neural networks
replace back-propagated mini-batch gradients with biased and noisy
approximations to the gradient. These modifications to training often result in
a privacy-preserving model that is significantly less accurate than its
non-private counterpart. We hypothesize that alternative training algorithms
may be more amenable to differential privacy. Specifically, we examine the
suitability of direct feedback alignment (DFA). We propose the first
differentially private method for training deep neural networks with DFA and
show that it achieves significant gains in accuracy (often by 10-20%) compared
to backprop-based differentially private training on a variety of architectures
(fully connected, convolutional) and datasets.
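As a rough illustration of the approach described in the abstract, the NumPy sketch below trains a small fully connected network with DFA while clipping and noising the per-example updates. The network sizes, hyperparameters, and function names are illustrative assumptions, not the paper's actual algorithm or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fully connected net: 784 -> 256 -> 10 (tanh hidden, softmax output).
W1 = rng.normal(0, 0.05, (784, 256)); b1 = np.zeros(256)
W2 = rng.normal(0, 0.05, (256, 10));  b2 = np.zeros(10)
B1 = rng.normal(0, 0.05, (10, 256))   # fixed random feedback matrix used by DFA

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def private_dfa_step(x, y_onehot, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-DFA step: per-example DFA updates are clipped and noised."""
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)
    p = softmax(h @ W2 + b2)
    e = p - y_onehot                              # output error (softmax + cross-entropy)
    d1 = (e @ B1) * (1.0 - h ** 2)                # DFA: project the error straight to layer 1
    n = x.shape[0]
    g = np.zeros(W1.size + b1.size + W2.size + b2.size)
    for i in range(n):                            # clip each example's update in L2 norm
        gi = np.concatenate([np.outer(x[i], d1[i]).ravel(), d1[i],
                             np.outer(h[i], e[i]).ravel(), e[i]])
        g += gi * min(1.0, clip / (np.linalg.norm(gi) + 1e-12))
    g = (g + rng.normal(0, sigma * clip, g.shape)) / n   # Gaussian mechanism, then average
    i0 = W1.size; i1 = i0 + b1.size; i2 = i1 + W2.size
    W1 -= lr * g[:i0].reshape(W1.shape); b1 -= lr * g[i0:i1]
    W2 -= lr * g[i1:i2].reshape(W2.shape); b2 -= lr * g[i2:]
```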
Related papers
- DP-SGD with weight clipping [1.0878040851638]
We present a novel approach that mitigates the bias arising from traditional gradient clipping.
By leveraging a public upper bound on the Lipschitz constant of the current model, together with its current location within the search domain, we can achieve refined noise-level adjustments.
arXiv Detail & Related papers (2023-10-27T09:17:15Z)
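For the weight-clipping idea above, here is a minimal sketch: the weights (rather than the gradients) are kept inside an L2 ball, so a public bound on per-example gradient norms is available and no clipping bias is introduced. The `grad_bound` function and all constants are placeholder assumptions, not the paper's actual bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_ball(w, radius):
    """Clip the weights (not the gradients) onto an L2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def grad_bound(radius):
    """Placeholder for a public Lipschitz-style bound on per-example gradient
    norms that holds whenever the weights stay inside the ball (problem-specific)."""
    return 2.0 * radius

def dp_sgd_weight_clipped_step(w, per_example_grads, radius=1.0, lr=0.1, sigma=1.0):
    # No gradient clipping: every per-example gradient already has norm <= C,
    # so averaging introduces no clipping bias and the noise can be calibrated to C.
    C = grad_bound(radius)
    g = per_example_grads.mean(axis=0)
    g = g + rng.normal(0, sigma * C / len(per_example_grads), size=g.shape)
    return project_to_ball(w - lr * g, radius)   # keep the bound valid for the next step
```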
- Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise [30.005017338416327]
Federated learning addresses the problem of isolated data islands by having clients disclose only their locally trained models.
Recently, differential privacy has been applied to federated learning to protect data privacy, but the added noise can substantially degrade learning performance.
We propose Adap DP-FL, a differentially private scheme for federated learning with adaptive noise.
arXiv Detail & Related papers (2022-11-29T03:20:40Z)
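The sketch below shows one generic form of adaptive noise in differentially private federated learning: per-client update clipping with a noise multiplier that decays over rounds. The decay schedule and parameters are illustrative assumptions, not the Adap DP-FL algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_dp_fl_round(client_updates, round_idx, clip0=1.0, sigma0=1.0, decay=0.99):
    """One federated round with clipped client updates and a decaying noise scale."""
    clip = clip0 * decay ** round_idx        # shrink the clipping threshold over rounds
    sigma = sigma0 * decay ** round_idx      # shrink the noise multiplier along with it
    clipped = []
    for u in client_updates:                 # clip each client's model update in L2 norm
        u = np.asarray(u, dtype=float)
        clipped.append(u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)))
    agg = np.mean(clipped, axis=0)
    agg += rng.normal(0, sigma * clip / len(clipped), size=agg.shape)
    return agg                               # noisy aggregate applied to the global model
```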
- Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search [38.83524780461911]
We show how carefully selecting the layers being fine-tuned in the pretrained neural network allows us to establish new state-of-the-art tradeoffs between privacy and accuracy.
We achieve 77.9% accuracy for $(\varepsilon, \delta) = (2, 10^{-5})$ on CIFAR-100 for a model pretrained on ImageNet.
arXiv Detail & Related papers (2022-10-05T11:32:49Z)
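The sketch below illustrates the layer-selection idea above: only a chosen subset of layers receives DP-SGD updates, so noise is injected into far fewer parameters. It clips per example within each tuned layer for brevity (a full implementation would clip the joint per-example gradient across all tuned layers), and the layer names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_finetune_step(params, per_example_grads, trainable, clip=0.1, sigma=1.0, lr=0.05):
    """DP-SGD update applied only to the selected layers; frozen layers get no
    update and therefore no noise.  Per-layer clipping is used here for brevity."""
    n = len(next(iter(per_example_grads.values())))   # batch size
    for name in trainable:
        grads = per_example_grads[name]               # shape: (batch, *param_shape)
        summed = np.zeros_like(params[name])
        for g in grads:                               # clip each example's gradient
            summed += g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        noisy = (summed + rng.normal(0, sigma * clip, summed.shape)) / n
        params[name] = params[name] - lr * noisy
    return params

# e.g. tune only the last block and the classifier head of a pretrained model:
# params = dp_finetune_step(params, grads, trainable={"block4", "head"})
```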
- Large Scale Transfer Learning for Differentially Private Image Classification [51.10365553035979]
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients.
While this result is quite appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than non-private training.
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
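The summary above refers to DP-SGD's per-example noise injection; the following is a minimal NumPy sketch of that generic mechanism (not of this paper's transfer-learning setup), with illustrative hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_example_grads, clip=1.0, sigma=1.0, lr=0.1):
    """Clip each example's gradient, sum, add Gaussian noise scaled to the
    clipping norm, then average and take a gradient step."""
    total = np.zeros_like(w)
    for g in per_example_grads:
        total += g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
    noisy = total + rng.normal(0, sigma * clip, size=w.shape)
    return w - lr * noisy / len(per_example_grads)
```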
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
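A schematic of the activation-privatisation idea above: each example's activation vector at the chosen layer is clipped in L2 norm and perturbed with Gaussian noise before flowing downstream. This is a simplified sketch under assumed parameters, not the NeuralDP method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_activations(h, clip=1.0, sigma=0.5):
    """Clip each example's activation vector in L2 norm and add Gaussian noise,
    so everything computed downstream of this layer is randomized per example."""
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h = h * np.minimum(1.0, clip / (norms + 1e-12))
    return h + rng.normal(0, sigma * clip, size=h.shape)
```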
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
Differentially private training degrades utility drastically when the model comprises a large number of trainable parameters.
We propose Gradient Embedding Perturbation (GEP), an algorithm for training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
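For context on the label-DP setting above, the sketch below implements k-ary randomized response, a standard baseline mechanism that randomizes only the labels before training. It is not the paper's proposed algorithm; the epsilon value and class count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(y, num_classes, epsilon):
    """k-ary randomized response: keep the true label with probability
    exp(eps) / (exp(eps) + k - 1), otherwise draw one of the other labels
    uniformly; training then proceeds on the randomized labels."""
    k = num_classes
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    y = np.asarray(y)
    keep = rng.random(y.shape) < p_keep
    others = (y + rng.integers(1, k, size=y.shape)) % k   # uniform over the k-1 other labels
    return np.where(keep, y, others)

# labels_private = randomized_response(labels, num_classes=10, epsilon=2.0)
```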
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.