Differentially Private Image Classification by Learning Priors from
Random Processes
- URL: http://arxiv.org/abs/2306.06076v2
- Date: Tue, 31 Oct 2023 17:49:45 GMT
- Authors: Xinyu Tang, Ashwinee Panda, Vikash Sehwag, Prateek Mittal
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In privacy-preserving machine learning, differentially private stochastic
gradient descent (DP-SGD) performs worse than SGD due to per-sample gradient
clipping and noise addition. A recent focus in private learning research is
improving the performance of DP-SGD on private data by incorporating priors
that are learned on real-world public data. In this work, we explore how we can
improve the privacy-utility tradeoff of DP-SGD by learning priors from images
generated by random processes and transferring these priors to private data. We
propose DP-RandP, a three-phase approach. We attain new state-of-the-art
accuracy when training from scratch on CIFAR10, CIFAR100, MedMNIST and ImageNet
for a range of privacy budgets $\varepsilon \in [1, 8]$. In particular, we
improve the previous best reported accuracy on CIFAR10 from $60.6 \%$ to $72.3
\%$ for $\varepsilon=1$.
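The per-sample gradient clipping and noise addition that the abstract identifies as the source of DP-SGD's accuracy loss are easy to make concrete. Below is a minimal PyTorch sketch of a single DP-SGD step; the explicit per-sample loop, the toy model, and all hyperparameter values are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dp_sgd_step(model, xs, ys, lr=0.1, max_grad_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: clip each per-sample gradient to L2 norm
    max_grad_norm, sum the clipped gradients, add Gaussian noise scaled
    to the clipping norm, then apply the averaged noisy gradient."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):  # explicit per-sample loop (slow but clear)
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (max_grad_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)  # accumulate the clipped per-sample gradient

    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * max_grad_norm
            p.add_(-(lr / len(xs)) * (s + noise))

model = torch.nn.Linear(32, 10)
dp_sgd_step(model, torch.randn(8, 32), torch.randint(0, 10, (8,)))
```

Production implementations vectorize the per-sample gradients (e.g. with Opacus or torch.func), but the two privacy-relevant operations are exactly the clip and the noise shown here.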
Related papers
- Private Fine-tuning of Large Language Models with Zeroth-order Optimization [51.19403058739522]
Differentially private stochastic gradient descent (DP-SGD) allows models to be trained in a privacy-preserving manner.
We introduce DP-ZO, a private fine-tuning framework for large language models that privatizes zeroth-order optimization methods (a rough sketch follows below).
arXiv Detail & Related papers (2024-01-09T03:53:59Z)
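For the DP-ZO entry above, a rough sketch of the idea as described in the summary: in zeroth-order (SPSA-style) optimization the only data-dependent quantity per step is a scalar loss difference, so that scalar can be clipped and noised to obtain privacy. Function names, the perturbation scale mu, and all hyperparameters below are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def dp_zo_step(model, x, y, lr=1e-3, mu=1e-3, clip=1.0, noise_multiplier=1.0):
    """Sketch of one differentially private zeroth-order step: estimate a
    directional derivative from two perturbed forward passes, privatize
    that scalar (clip + Gaussian noise), then step along the shared
    random direction."""
    params = [p for p in model.parameters() if p.requires_grad]
    direction = [torch.randn_like(p) for p in params]

    def loss_with_shift(sign):
        with torch.no_grad():
            for p, d in zip(params, direction):
                p.add_(sign * mu * d)
            value = F.cross_entropy(model(x), y).item()
            for p, d in zip(params, direction):
                p.add_(-sign * mu * d)  # undo the perturbation
        return value

    g = (loss_with_shift(+1.0) - loss_with_shift(-1.0)) / (2.0 * mu)
    g = max(-clip, min(clip, g))                            # clip the scalar
    g += torch.randn(()).item() * noise_multiplier * clip   # noise the scalar

    with torch.no_grad():
        for p, d in zip(params, direction):
            p.add_(-lr * g * d)

model = torch.nn.Linear(16, 4)
dp_zo_step(model, torch.randn(8, 16), torch.randint(0, 4, (8,)))
```

The appeal, as the summary suggests, is that only a single scalar is privatized per step, independent of model size.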
- TAN Without a Burn: Scaling Laws of DP-SGD [70.7364032297978]
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently.
We decouple the privacy analysis from the experimental behavior of noisy training, exploring the privacy-utility trade-off with minimal computational requirements (a sketch of the scaling bookkeeping follows below).
We apply the proposed method on CIFAR-10 and ImageNet and, in particular, strongly improve the state-of-the-art on ImageNet with a +9-point gain in top-1 accuracy.
arXiv Detail & Related papers (2022-10-07T08:44:35Z)
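For the TAN entry above, my reading of the decoupling is that training behavior is governed (approximately) by the noise-to-batch-size ratio sigma/B, so hyperparameters can be searched in cheap small-batch runs and transferred to the expensive full-scale run. A sketch of that bookkeeping, with made-up reference numbers:

```python
def scale_down(batch_size, noise_multiplier, k):
    """Divide the per-step compute by k while keeping the ratio
    sigma/B fixed, so the cheap simulated run is expected to track
    the training behavior of the expensive reference run."""
    assert batch_size % k == 0, "k must divide the batch size"
    return batch_size // k, noise_multiplier / k

B_ref, sigma_ref = 4096, 3.0  # illustrative reference configuration
for k in (1, 8, 64):
    B, sigma = scale_down(B_ref, sigma_ref, k)
    print(f"k={k:2d}: batch={B:4d} sigma={sigma:.4f} sigma/B={sigma / B:.2e}")
```

The formal privacy accounting is then done once, for the full-scale configuration, independently of the simulated runs.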
- Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search [38.83524780461911]
We show how carefully selecting the layers being fine-tuned in the pretrained neural network allows us to establish new state-of-the-art tradeoffs between privacy and accuracy (see the sketch below).
We achieve 77.9% accuracy at $(\varepsilon, \delta) = (2, 10^{-5})$ on CIFAR-100 for a model pretrained on ImageNet.
arXiv Detail & Related papers (2022-10-05T11:32:49Z)
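The layer-selection idea from the fine-tuning entry above can be sketched as treating the set of trainable layers as one more hyperparameter: freeze everything else and run DP-SGD only on the selected subset. The resnet18 and the chosen prefixes below are placeholders, not the paper's configuration.

```python
import torch
from torchvision.models import resnet18

def select_trainable(model, prefixes):
    """Freeze all parameters except those whose names start with one of
    the given prefixes; only these would be updated with DP-SGD."""
    for name, p in model.named_parameters():
        p.requires_grad = any(name.startswith(pre) for pre in prefixes)
    return [p for p in model.parameters() if p.requires_grad]

model = resnet18()  # stand-in for an ImageNet-pretrained backbone
trainable = select_trainable(model, ["layer4", "fc"])
print(f"{sum(p.numel() for p in trainable):,} trainable parameters")
```

Fewer trainable parameters means less noise is injected overall, which is one intuition for why the choice of layers matters so much under DP.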
- DP$^2$-VAE: Differentially Private Pre-trained Variational Autoencoders [26.658723213776632]
We propose DP$^2$-VAE, a training mechanism for variational autoencoders (VAEs) with provable DP guarantees and improved utility via pre-training on private data.
We conduct extensive experiments on image datasets to illustrate our superiority over baselines under various privacy budgets and evaluation metrics.
arXiv Detail & Related papers (2022-08-05T23:57:34Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees (a one-step sketch of the per-example accounting follows below).
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
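For the individual-accounting entry above, the single-step intuition can be sketched as follows: the noise DP-SGD adds is calibrated to the clipping threshold, so an example whose gradient norm sits below that threshold has smaller sensitivity and hence a stronger individual guarantee. The sketch uses the classical Gaussian-mechanism bound for one release only; the paper itself composes guarantees over all of training, and all numbers here are illustrative.

```python
import math

def individual_epsilon(grad_norm, clip, noise_multiplier, delta=1e-5):
    """Rough per-example epsilon for one noisy gradient release, via the
    classical Gaussian-mechanism relation
    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon,
    solved for epsilon."""
    sensitivity = min(grad_norm, clip)   # per-example L2 sensitivity
    sigma = noise_multiplier * clip      # noise scale is the same for everyone
    return (sensitivity / sigma) * math.sqrt(2.0 * math.log(1.25 / delta))

clip, nm = 1.0, 4.0
for norm in (0.05, 0.3, 1.0, 7.0):  # hypothetical per-example gradient norms
    print(f"||g|| = {norm:4.2f} -> eps ~= {individual_epsilon(norm, clip, nm):.3f}")
```

Examples at or above the clipping threshold all share the worst-case bound, which is why "most examples enjoy stronger guarantees" in the summary above.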
- Large Scale Transfer Learning for Differentially Private Image Classification [51.10365553035979]
Differential Privacy (DP) provides a formal framework for training machine learning models with individual-example-level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients.
While this guarantee is appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than that of non-private training.
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
Utility degrades drastically under differential privacy when the model comprises a large number of trainable parameters.
We propose Gradient Embedding Perturbation (GEP), an algorithm for training differentially private deep models with decent accuracy (sketched below).
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
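A sketch of the GEP mechanism described above, under my reading of the method: project each gradient onto a low-dimensional subspace (in the paper, estimated from non-sensitive auxiliary gradients), then clip and perturb the small embedding and the residual separately, so most of the noise is confined to a few dimensions. The basis construction and all constants below are illustrative simplifications.

```python
import numpy as np

def gep_privatize(grad, basis, clip_embed=1.0, clip_resid=1.0,
                  sigma_embed=0.5, sigma_resid=2.0, rng=None):
    """Gradient Embedding Perturbation sketch: privatize the embedding
    and the residual separately, then reconstruct the gradient."""
    rng = rng or np.random.default_rng(0)
    w = basis @ grad                     # low-dimensional embedding, shape (k,)
    r = grad - basis.T @ w               # residual in the full space, shape (d,)
    w *= min(1.0, clip_embed / (np.linalg.norm(w) + 1e-12))
    r *= min(1.0, clip_resid / (np.linalg.norm(r) + 1e-12))
    w += rng.normal(0.0, sigma_embed * clip_embed, size=w.shape)
    r += rng.normal(0.0, sigma_resid * clip_resid, size=r.shape)
    return basis.T @ w + r

d, k = 1000, 20
q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(d, k)))
basis = q.T                              # (k, d) with orthonormal rows
noisy_grad = gep_privatize(np.random.default_rng(2).normal(size=d), basis)
print(noisy_grad.shape)
```

The design intuition: the informative part of the gradient lives in the low-dimensional embedding, which needs far less total noise than the full d-dimensional gradient.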