Differentially Private Generation of Small Images
- URL: http://arxiv.org/abs/2005.00783v2
- Date: Wed, 6 May 2020 09:38:10 GMT
- Title: Differentially Private Generation of Small Images
- Authors: Justus T. C. Schwabedal and Pascal Michel and Mario S. Riontino
- Abstract summary: We numerically measure the privacy-utility trade-off using parameters from $\epsilon$-$\delta$ differential privacy and the inception score.
Our experiments uncover a saturated training regime where an increasing privacy budget adds little to the quality of generated images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the training of generative adversarial networks with differential
privacy to anonymize image data sets. On MNIST, we numerically measure the
privacy-utility trade-off using parameters from $\epsilon$-$\delta$
differential privacy and the inception score. Our experiments uncover a
saturated training regime where an increasing privacy budget adds little to the
quality of generated images. We also explain analytically why differentially
private Adam optimization is independent of the gradient clipping parameter.
Furthermore, we highlight common errors in previous works on differentially
private deep learning, which we uncovered in recent literature. Throughout the
treatment of the subject, we hope to prevent erroneous estimates of anonymity
in the future.
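The claimed independence of differentially private Adam from the gradient clipping parameter can be sketched in a few lines. A minimal argument, in our own notation (reconstructed from the standard DP-SGD recipe, not copied from the paper), assuming every per-example gradient is in the clipping regime ($\|g_t^{(i)}\|_2 \ge C$) and Adam's stability constant is negligible:

```latex
\tilde g_t \;=\; \frac{1}{B}\Bigg(\sum_{i=1}^{B} C\,\frac{g_t^{(i)}}{\|g_t^{(i)}\|_2}
\;+\; \mathcal N\!\big(0,\,\sigma^2 C^2 I\big)\Bigg) \;\propto\; C,
\qquad\text{so}\qquad
C \to \lambda C \;\Rightarrow\; m_t \to \lambda m_t,\;\; v_t \to \lambda^2 v_t,
\qquad
\Delta\theta_t \;=\; -\eta\,\frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon_{\text{Adam}}}
\;\approx\; -\eta\,\frac{\lambda \hat m_t}{\lambda\sqrt{\hat v_t}}
\;=\; -\eta\,\frac{\hat m_t}{\sqrt{\hat v_t}}.
```

The clipping scale cancels in Adam's ratio of first moment to square-rooted second moment, which is the independence the abstract refers to.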
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
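As an illustration of such selective application, here is a toy sketch in which Gaussian noise is applied only where a sensitivity mask is set. This is our illustrative reading, not the paper's algorithm; the noise scale and the mask are hypothetical.

```python
import numpy as np

def masked_gaussian_noise(image: np.ndarray, sensitive_mask: np.ndarray,
                          sigma: float = 0.5) -> np.ndarray:
    """Perturb only the sensitive pixels of `image` (values in [0, 1]).

    Toy illustration of applying a DP-style Gaussian mechanism selectively:
    non-sensitive regions pass through untouched.
    """
    noise = np.random.normal(0.0, sigma, size=image.shape)
    noisy = image + sensitive_mask * noise        # noise only where mask == 1
    return np.clip(noisy, 0.0, 1.0)

# Example: an 8x8 "image" whose top-left quadrant is marked sensitive.
img = np.random.rand(8, 8)
mask = np.zeros((8, 8))
mask[:4, :4] = 1.0
out = masked_gaussian_noise(img, mask)
assert np.allclose(out[4:, 4:], img[4:, 4:])      # non-sensitive region intact
```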
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Training Private Models That Know What They Don't Know [40.19666295972155]
We find that several popular selective prediction approaches are ineffective in a differentially private setting.
We propose a novel evaluation mechanism which isolates selective prediction performance across model utility levels.
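For context, the simplest selective prediction baseline abstains below a confidence threshold and reports coverage and accuracy on the accepted examples. A generic sketch (the paper's evaluation mechanism is more involved and not specified in this summary):

```python
import numpy as np

def selective_accuracy(probs: np.ndarray, labels: np.ndarray, tau: float):
    """Abstain when max softmax probability < tau; return (coverage, accuracy).

    A generic confidence-thresholding baseline for selective prediction.
    """
    confidence = probs.max(axis=1)
    accept = confidence >= tau
    if not accept.any():
        return 0.0, float("nan")
    correct = probs[accept].argmax(axis=1) == labels[accept]
    return float(accept.mean()), float(correct.mean())
```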
arXiv Detail & Related papers (2023-05-28T12:20:07Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Enhancement for Cloud-Based Few-Shot Learning [4.1579007112499315]
We study privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns a privacy-preserving representation through a joint loss.
The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
arXiv Detail & Related papers (2022-05-10T18:48:13Z)
- Assessing Differentially Private Variational Autoencoders under Membership Inference [26.480694390462617]
We quantify and compare the privacy-accuracy trade-off for differentially private Variational Autoencoders.
We rarely observe a favorable privacy-accuracy trade-off for Variational Autoencoders, and identify a case where local DP (LDP) outperforms central DP (CDP).
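A standard baseline for such membership-inference evaluations is loss thresholding: flag an example as a training member if its (e.g., reconstruction) loss falls below a threshold. A minimal sketch in our notation; the paper's attack setup may differ:

```python
import numpy as np

def mi_attack_threshold(losses_members: np.ndarray,
                        losses_nonmembers: np.ndarray,
                        tau: float) -> float:
    """Loss-thresholding membership inference: guess 'member' if loss < tau.

    Returns the attack's balanced accuracy.
    """
    tpr = float((losses_members < tau).mean())     # members flagged correctly
    fpr = float((losses_nonmembers < tau).mean())  # non-members flagged wrongly
    return 0.5 * (tpr + (1.0 - fpr))
```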
arXiv Detail & Related papers (2022-04-16T21:53:09Z)
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, Gradient Embedding Perturbation (GEP), towards training differentially private deep models with decent accuracy.
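The core idea, as we understand it, is to project gradients onto a low-dimensional anchor subspace and perturb the embedding and the residual separately, so most of the noise lands in few dimensions. A rough sketch; the basis, clipping thresholds, and noise scales are placeholders, not calibrated to a specific $(\epsilon, \delta)$:

```python
import numpy as np

def gep_step(grad: np.ndarray, basis: np.ndarray,
             clip_w: float, clip_r: float,
             sigma_w: float, sigma_r: float) -> np.ndarray:
    """One GEP-style noisy gradient (sketch).

    `basis` is an orthonormal (d, k) anchor subspace, assumed estimated from
    auxiliary non-private gradients. Project, clip and perturb the embedding
    and residual separately, then recombine.
    """
    w = basis.T @ grad                      # low-dimensional embedding
    r = grad - basis @ w                    # residual off the subspace
    w *= min(1.0, clip_w / (np.linalg.norm(w) + 1e-12))
    r *= min(1.0, clip_r / (np.linalg.norm(r) + 1e-12))
    w_noisy = w + np.random.normal(0.0, sigma_w * clip_w, size=w.shape)
    r_noisy = r + np.random.normal(0.0, sigma_r * clip_r, size=r.shape)
    return basis @ w_noisy + r_noisy
```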
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
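The two ingredients named above are the heart of the DP-SGD recipe. A minimal sketch of a single noisy gradient aggregation (generic DP-SGD, not this paper's specific training setup):

```python
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray, clip_norm: float,
                noise_multiplier: float) -> np.ndarray:
    """Clip each example's gradient, sum, add Gaussian noise, average.

    `per_example_grads` has shape (batch, dim); the noise standard deviation
    is noise_multiplier * clip_norm, as in standard DP-SGD.
    """
    batch = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / batch
```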
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
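For reference, joint differential privacy constrains what the outputs delivered to all other users reveal about any one user's data. A standard formulation in our notation (not necessarily the paper's): a mechanism $M$ mapping the joint data of $n$ users to $n$ per-user outputs is $\epsilon$-JDP if, for every user $i$, all datasets $D, D'$ differing only in user $i$'s data, and every event $S$ over the outputs delivered to the other users,

```latex
\Pr\!\big[M(D)_{-i} \in S\big] \;\le\; e^{\epsilon}\,\Pr\!\big[M(D')_{-i} \in S\big],
```

where $M(D)_{-i}$ denotes the outputs given to all users other than $i$.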
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Model Explanations with Differential Privacy [21.15017895170093]
Black-box machine learning models are used in critical decision-making domains.
Model explanations can leak information about the training data and the explanation data used to generate them.
We propose differentially private algorithms to construct feature-based model explanations.
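As a point of reference only (the paper's algorithms are more specialized), the simplest way to release feature-importance scores with DP is the Laplace mechanism on a bounded-sensitivity query:

```python
import numpy as np

def dp_feature_importance(scores: np.ndarray, sensitivity: float,
                          epsilon: float) -> np.ndarray:
    """Release feature-importance scores via the Laplace mechanism.

    Generic epsilon-DP release: noise scale is sensitivity / epsilon, where
    `sensitivity` bounds how much one training point can move the scores.
    """
    scale = sensitivity / epsilon
    return scores + np.random.laplace(0.0, scale, size=scores.shape)
```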
arXiv Detail & Related papers (2020-06-16T13:18:02Z)
- Auditing Differentially Private Machine Learning: How Private is Private SGD? [16.812900569416062]
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks.
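Attack success rates translate into an empirical lower bound on $\epsilon$ via the hypothesis-testing characterization of $(\epsilon, \delta)$-DP, which requires $1 - \delta \le \mathrm{FPR} + e^{\epsilon}\,\mathrm{FNR}$ for any distinguishing test. A minimal sketch in our notation; a careful audit, as in the paper, would also put confidence intervals on the estimated rates:

```python
import math

def epsilon_lower_bound(fpr: float, fnr: float, delta: float = 1e-5) -> float:
    """Empirical lower bound on epsilon from a distinguishing attack.

    From 1 - delta <= fpr + e^eps * fnr it follows that
    eps >= log((1 - delta - fpr) / fnr). The fpr/fnr would come from
    repeated training runs with and without the poisoned point.
    """
    if fnr <= 0 or 1.0 - delta - fpr <= 0:
        return float("inf")
    return max(0.0, math.log((1.0 - delta - fpr) / fnr))
```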
arXiv Detail & Related papers (2020-06-13T20:00:18Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.