Not all noise is accounted equally: How differentially private learning
benefits from large sampling rates
- URL: http://arxiv.org/abs/2110.06255v1
- Date: Tue, 12 Oct 2021 18:11:31 GMT
- Title: Not all noise is accounted equally: How differentially private learning
benefits from large sampling rates
- Authors: Friedrich Dörmann, Osvald Frisk, Lars Nørvang Andersen, Christian Fischer Pedersen
- Abstract summary: In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise.
In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks.
We propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning often involves sensitive data, and privacy-preserving
extensions to Stochastic Gradient Descent (SGD) and other machine learning
algorithms have therefore been developed using the definitions of Differential
Privacy (DP). In differentially private SGD, the gradients computed at each
training iteration are subject to two different types of noise: first, inherent
sampling noise arising from the use of minibatches, and second, additive
Gaussian noise from the underlying mechanisms that introduce privacy. In this
study, we show that these two types of noise are equivalent in their effect on
the utility of private neural networks; however, they are not accounted for
equally in the privacy budget. Given this observation, we propose a training
paradigm that shifts the proportions of noise towards less inherent and more
additive noise, such that more of the overall noise can be accounted for in the
privacy budget. With this paradigm, we are able to improve on the
state-of-the-art privacy/utility tradeoff of private end-to-end CNNs.
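To make the two noise sources concrete, below is a minimal NumPy sketch of one DP-SGD step on a toy linear model: Poisson subsampling at rate q produces the inherent sampling noise, while the Gaussian term calibrated to the clipping norm is the additive noise that the privacy accountant charges for. The model, hyperparameters, and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0,
                sampling_rate=0.5, rng=None):
    """One DP-SGD step on a linear least-squares model (illustrative only).

    Poisson subsampling at `sampling_rate` is the source of the inherent
    sampling noise; the Gaussian term scaled by noise_multiplier * clip_norm
    is the additive noise that is accounted for in the privacy budget.
    """
    rng = rng or np.random.default_rng()
    # Poisson subsampling: include each example independently with prob. q.
    mask = rng.random(len(X)) < sampling_rate
    Xb, yb = X[mask], y[mask]
    expected_batch = sampling_rate * len(X)

    # Per-example gradients of 0.5 * (x.w - y)^2, clipped to clip_norm.
    grads = (Xb @ w - yb)[:, None] * Xb
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)

    # Sum the clipped gradients, add calibrated Gaussian noise, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / expected_batch

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=256)
w = np.zeros(5)
for _ in range(50):
    w = dp_sgd_step(w, X, y, rng=rng)
```

In the paradigm proposed above, raising sampling_rate shrinks the minibatch sampling noise of the gradient estimate, and a correspondingly larger noise_multiplier shifts the overall noise towards the Gaussian term that the privacy budget explicitly accounts for.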
Related papers
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN).
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
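As a rough illustration of noising (and sparsifying) the local computations before they leave a client, here is a simplified first-order sketch; the actual DP-FCRN method is a second-order, cubic-regularized Newton scheme, and the clipping, noise scale, and top-k rule below are assumptions for illustration only.

```python
import numpy as np

def local_update(w, X, y, clip_norm=1.0, sigma=1.0, top_k=2, rng=None):
    """Clip, perturb, and sparsify a client's local update before sharing it."""
    rng = rng or np.random.default_rng()
    # Local least-squares gradient, clipped to bound its sensitivity.
    g = X.T @ (X @ w - y) / len(X)
    g = g / max(1.0, np.linalg.norm(g) / clip_norm)
    # Gaussian perturbation applied during the local computation.
    g = g + rng.normal(scale=sigma * clip_norm, size=g.shape)
    # Sparsification: keep only the top_k largest-magnitude coordinates.
    keep = np.argsort(np.abs(g))[-top_k:]
    sparse = np.zeros_like(g)
    sparse[keep] = g[keep]
    return sparse

def federated_round(w, clients, lr=0.1, rng=None):
    """Average the perturbed, sparsified client updates and take a step."""
    updates = [local_update(w, X, y, rng=rng) for X, y in clients]
    return w - lr * np.mean(updates, axis=0)
```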
arXiv Detail & Related papers (2024-08-08T08:48:54Z)
- Certification for Differentially Private Prediction in Gradient-Based Training [36.686002369773014]
We use convex relaxation and bound propagation to compute a provable upper bound for the local and smooth sensitivity of a prediction.
This bound allows us to reduce the magnitude of noise added or improve privacy accounting in the private prediction setting.
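How a tighter, certified sensitivity bound translates into less noise can be sketched as follows. The bound-propagation step itself is not reproduced; the Laplace mechanism is one common choice rather than necessarily the paper's, and the names and numbers are placeholders.

```python
import numpy as np

def private_prediction(score, sensitivity_bound, epsilon, rng=None):
    """Release a prediction score with Laplace noise scaled to a sensitivity
    bound: a tighter certified bound means less noise for the same epsilon."""
    rng = rng or np.random.default_rng()
    return score + rng.laplace(scale=sensitivity_bound / epsilon)

worst_case_bound = 1.0   # loose, worst-case (global) sensitivity
certified_bound = 0.1    # hypothetical provable upper bound for this input
noisy_score = private_prediction(0.73, certified_bound, epsilon=1.0)
```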
arXiv Detail & Related papers (2024-06-19T10:47:00Z)
- Adaptive Differential Privacy in Federated Learning: A Priority-Based Approach [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
DP offers a framework that gives a privacy guarantee by adding certain amounts of noise to parameters.
We propose adaptive noise addition in FL which decides the value of injected noise based on features' relative importance.
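A toy sketch of the priority-based idea follows: parameters tied to more important features receive less noise and less important ones more, for a comparable total amount of noise. How the importance scores are obtained, and how they are handled in the privacy accounting, is omitted; the inverse-importance normalization is an assumption.

```python
import numpy as np

def adaptive_noise(update, importance, base_sigma=1.0, rng=None):
    """Add per-coordinate Gaussian noise inversely weighted by importance."""
    rng = rng or np.random.default_rng()
    importance = np.asarray(importance, dtype=float)
    # Inverse-importance weights, rescaled so their mean equals 1, keeping the
    # overall noise level comparable to uniform noise with std base_sigma.
    weights = 1.0 / (importance + 1e-8)
    weights = weights / weights.mean()
    return update + rng.normal(size=update.shape) * base_sigma * weights

update = np.array([0.5, -0.2, 0.1])
importance = np.array([0.7, 0.2, 0.1])  # e.g. relative feature importance
noisy_update = adaptive_noise(update, importance)
```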
arXiv Detail & Related papers (2024-01-04T03:01:15Z)
- Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance, corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
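A small sketch of the Brownian-motion picture described above: the first release uses the endpoint of a simulated Brownian path (highest variance), and later, more accurate releases simply rewind to earlier points on the same path. The stopping rule and the ReducedAboveThreshold component are omitted; the times and values are illustrative assumptions.

```python
import numpy as np

def brownian_path(times, rng=None):
    """Sample a Brownian motion B_t at the given increasing times (B_0 = 0)."""
    rng = rng or np.random.default_rng()
    gaps = np.diff(np.concatenate(([0.0], times)))
    return np.cumsum(rng.normal(scale=np.sqrt(gaps)))

true_value = 2.5
times = np.array([0.25, 0.5, 1.0])  # variances of the candidate releases
path = brownian_path(times)

# Release the noisiest value first (t = 1.0), then refine, if needed, by
# moving to earlier points of the same path (lower variance, same randomness).
releases = [true_value + path[i] for i in reversed(range(len(times)))]
```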
arXiv Detail & Related papers (2022-06-15T01:43:37Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should match the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied on sensitive or private training examples, such as medical or financial records, it is still probable to divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
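A minimal sketch of perturbing a loss value with Gaussian noise, as the summary describes; clipping the loss bounds its sensitivity so that the noise scale is meaningful. How the perturbed loss is then consumed by the GAN updates follows the paper and is not reproduced here, and the constants below are assumptions rather than the paper's calibration.

```python
import numpy as np

def perturb_loss(loss_value, clip=1.0, sigma=0.5, rng=None):
    """Clip a scalar loss value and add Gaussian noise calibrated to the clip."""
    rng = rng or np.random.default_rng()
    clipped = float(np.clip(loss_value, -clip, clip))
    return clipped + rng.normal(scale=sigma * clip)

noisy_discriminator_loss = perturb_loss(0.87)
```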
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Differentially Private Naive Bayes Classifier using Smooth Sensitivity [0.0]
We have provided a differentially private Naive Bayes classifier that adds noise proportional to the Smooth Sensitivity of its parameters.
Our experimental results on real-world datasets show that the accuracy of our method improves significantly while still preserving $\varepsilon$-differential privacy.
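As a rough sketch, the class-conditional parameters of a Naive Bayes model can be released with noise scaled to a pre-computed smooth sensitivity. Computing that sensitivity is the paper's contribution and is not shown; `smooth_sens`, the Laplace noise, and the calibration below are assumptions.

```python
import numpy as np

def private_class_means(X, y, smooth_sens, epsilon, rng=None):
    """Release per-class feature means with noise proportional to smooth_sens."""
    rng = rng or np.random.default_rng()
    means = {}
    for c in np.unique(y):
        mu = X[y == c].mean(axis=0)
        means[c] = mu + rng.laplace(scale=smooth_sens / epsilon, size=mu.shape)
    return means

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
noisy_means = private_class_means(X, y, smooth_sens=0.05, epsilon=1.0, rng=rng)
```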
arXiv Detail & Related papers (2020-03-31T05:03:04Z)