NeuralDP: Differentially private neural networks by design
- URL: http://arxiv.org/abs/2107.14582v2
- Date: Mon, 2 Aug 2021 13:37:56 GMT
- Title: NeuralDP: Differentially private neural networks by design
- Authors: Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski,
Daniel Rueckert, Georgios Kaissis
- Abstract summary: We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
- Score: 61.675604648670095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The application of differential privacy to the training of deep neural
networks holds the promise of allowing large-scale (decentralized) use of
sensitive data while providing rigorous privacy guarantees to the individual.
The predominant approach to differentially private training of neural networks
is DP-SGD, which relies on norm-based gradient clipping as a method for
bounding sensitivity, followed by the addition of appropriately calibrated
Gaussian noise. In this work we propose NeuralDP, a technique for privatising
activations of some layer within a neural network, which by the post-processing
properties of differential privacy yields a differentially private network. We
experimentally demonstrate on two datasets (MNIST and Pediatric Pneumonia
Dataset (PPD)) that our method offers substantially improved privacy-utility
trade-offs compared to DP-SGD.
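As a rough illustration of the two mechanisms sketched in the abstract (norm-based gradient clipping plus Gaussian noise in DP-SGD, and clip-and-noise applied to one layer's activations so that every downstream layer is pure post-processing), here is a minimal, hypothetical PyTorch sketch. It is not the paper's implementation; layer sizes, clipping bounds, and noise multipliers are illustrative placeholders, and no privacy accounting is shown.

```python
# Minimal sketch (not the paper's exact algorithm): a generic clip-and-noise
# primitive applied either to one layer's activations (post-processing then
# carries the DP guarantee through the rest of the network) or to per-example
# gradients (the DP-SGD recipe). All hyperparameters are placeholders.
import torch
import torch.nn as nn


def clip_and_noise(x: torch.Tensor, clip_norm: float, noise_multiplier: float) -> torch.Tensor:
    """Clip each row of x to L2 norm <= clip_norm (bounding sensitivity),
    then add Gaussian noise with std = noise_multiplier * clip_norm."""
    norms = x.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = x * torch.clamp(clip_norm / norms, max=1.0)
    return clipped + noise_multiplier * clip_norm * torch.randn_like(clipped)


class PrivatisedActivationNet(nn.Module):
    """Toy MLP that clips and noises the output of its first layer; the
    second layer only post-processes the noised activations."""

    def __init__(self, clip_norm: float = 1.0, noise_multiplier: float = 1.0):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)
        self.clip_norm = clip_norm
        self.noise_multiplier = noise_multiplier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.fc1(x))
        h = clip_and_noise(h, self.clip_norm, self.noise_multiplier)  # privatised layer
        return self.fc2(h)  # post-processing only


def dp_sgd_update(per_example_grads: torch.Tensor, clip_norm: float, noise_multiplier: float) -> torch.Tensor:
    """One DP-SGD-style aggregation: clip each per-example gradient, sum,
    add a single draw of Gaussian noise, then average over the batch."""
    norms = per_example_grads.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = per_example_grads * torch.clamp(clip_norm / norms, max=1.0)
    noisy_sum = clipped.sum(dim=0) + noise_multiplier * clip_norm * torch.randn(per_example_grads.shape[1])
    return noisy_sum / per_example_grads.shape[0]


if __name__ == "__main__":
    net = PrivatisedActivationNet()
    logits = net(torch.randn(32, 784))   # forward pass on a fake MNIST-sized batch
    fake_grads = torch.randn(32, 1000)   # stand-in for 32 flattened per-example gradients
    step = dp_sgd_update(fake_grads, clip_norm=1.0, noise_multiplier=1.1)
    print(logits.shape, step.shape)
```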
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data, allowing non-sensitive spatio-temporal regions to be defined where DP is not applied, or where differential privacy is combined with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Differentially Private Neural Network Training under Hidden State Assumption [2.3371504588528635]
We present a novel approach called differentially private neural block coordinate descent (DP-SBCD) for neural networks with provable guarantees of privacy under the hidden state assumption.
arXiv Detail & Related papers (2024-07-11T07:14:40Z)
- Differential Privacy Meets Neural Network Pruning [10.77469946354744]
We study the interplay between neural network pruning and differential privacy, through the two modes of parameter updates.
Our experimental results demonstrate how decreasing the parameter space improves differentially private training.
By studying two popular forms of pruning which do not rely on gradients and do not incur an additional privacy loss, we show that random selection performs on par with magnitude-based selection.
arXiv Detail & Related papers (2023-03-08T14:27:35Z)
- Differentially Private Generative Adversarial Networks with Model Inversion [6.651002556438805]
To protect sensitive data in training a Generative Adversarial Network (GAN), the standard approach is to use differentially private (DP) gradient descent method.
We propose Differentially Private Model Inversion (DPMI) method where the private data is first mapped to the latent space via a public generator.
Our approach outperforms the standard DP-GAN method in terms of Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee.
arXiv Detail & Related papers (2022-01-10T02:26:26Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Complex-valued Federated Learning with Differential Privacy and MRI Applications [51.34714485616763]
We introduce the complex-valued Gaussian mechanism, whose behaviour we characterise in terms of $f$-DP, $(\varepsilon, \delta)$-DP and Rényi-DP.
We present novel complex-valued neural network primitives compatible with DP.
Experimentally, we showcase a proof-of-concept by training federated complex-valued neural networks with DP on a real-world task.
arXiv Detail & Related papers (2021-10-07T14:03:00Z)
- Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Differentially Private Deep Learning with Direct Feedback Alignment [15.410557873153833]
We propose the first differentially private method for training deep neural networks with direct feedback alignment (DFA).
DFA achieves significant gains in accuracy (often by 10-20%) compared to backprop-based differentially private training on a variety of architectures.
arXiv Detail & Related papers (2020-10-08T00:25:22Z)
- On the effect of normalization layers on Differentially Private training of deep Neural networks [19.26653302753129]
We study the effect of normalization layers on the performance of DPSGD.
We propose a novel method for integrating batch normalization with DPSGD without incurring an additional privacy loss.
arXiv Detail & Related papers (2020-06-19T01:43:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.