RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial
Network
- URL: http://arxiv.org/abs/2007.02056v1
- Date: Sat, 4 Jul 2020 09:51:02 GMT
- Title: RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial
Network
- Authors: Chuan Ma, Jun Li, Ming Ding, Bo Liu, Kang Wei, Jian Weng and H.
Vincent Poor
- Abstract summary: Generative adversarial networks (GANs) have attracted increasing attention recently owing to their impressive ability to generate realistic samples with high privacy protection.
However, when GANs are trained on sensitive or private examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
- Score: 75.81653258081435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have attracted increasing
attention recently owing to their impressive ability to generate realistic
samples with high privacy protection. Without interacting directly with the
training examples, the generative model can be used to estimate the underlying
distribution of an original dataset, while the discriminative model examines
the quality of the generated samples by comparing them against the training
examples. However, when GANs are trained on sensitive or private examples,
such as medical or financial records, they may still divulge individuals'
sensitive and private information. To mitigate this information leakage and
construct a private GAN, in this work we propose a Rényi-differentially
private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by
carefully adding random noise to the value of the loss function during
training. Moreover, we derive analytical results for the total privacy loss
under subsampling and accumulated iterations, which demonstrate the
effectiveness of the privacy-budget allocation. In addition, to mitigate the
negative impact of the injected noise, we enhance the proposed algorithm with
an adaptive noise-tuning step, which adjusts the amount of added noise
according to the testing accuracy. Through extensive experiments, we verify
that the proposed algorithm achieves a better privacy level while producing
high-quality samples, compared with a benchmark DP-GAN scheme based on noise
perturbation of training gradients.
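The two mechanisms described in the abstract (perturbing the loss value and adaptively tuning the noise scale) can be illustrated with a minimal sketch. All names, the clipping bound, and the multiplicative decay rule below are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_loss(loss_value, clip=1.0, sigma=0.5):
    """Clip the scalar loss to bound its sensitivity, then add Gaussian noise.

    Bounding each example's contribution via clipping is what lets the
    Gaussian mechanism provide a (Renyi) differential privacy guarantee.
    """
    clipped = float(np.clip(loss_value, -clip, clip))
    return clipped + rng.normal(0.0, sigma * clip)

def adapt_sigma(sigma, test_acc, prev_acc, decay=0.99):
    """Adaptive noise tuning (illustrative rule): shrink the noise scale
    when test accuracy stops improving, so later iterations lose less utility."""
    return sigma * decay if test_acc <= prev_acc else sigma
```

With sigma set to 0 the mechanism reduces to plain clipping, which is a convenient sanity check.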
Related papers
- Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training [31.559864332056648]
We propose a generic differential privacy framework with heterogeneous noise (DP-Hero)
Atop DP-Hero, we instantiate a heterogeneous version of DP-SGD, where the noise injected into gradient updates is heterogeneous and guided by prior-established model parameters.
We conduct comprehensive experiments to verify and explain the effectiveness of the proposed DP-Hero, showing improved training accuracy compared with state-of-the-art works.
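A heterogeneous variant of DP-SGD along these lines might look like the sketch below; the magnitude-based noise allocation is a hypothetical stand-in for DP-Hero's prior-guided rule, which is specified in the paper itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def hetero_dp_sgd_step(params, grads, lr=0.1, clip=1.0, base_sigma=1.0):
    """One DP-SGD-style update with heterogeneous (per-coordinate) noise.

    Standard DP-SGD clips the gradient norm and adds isotropic Gaussian
    noise; here the per-coordinate noise scale is instead modulated by
    parameter magnitude as an illustrative allocation rule.
    """
    norm = np.linalg.norm(grads)
    clipped = grads * min(1.0, clip / max(norm, 1e-12))   # norm clipping
    scale = base_sigma * clip / (1.0 + np.abs(params))    # heterogeneous scales
    noisy = clipped + rng.normal(0.0, 1.0, size=grads.shape) * scale
    return params - lr * noisy
```

Setting base_sigma to 0 recovers plain clipped gradient descent, which makes the update easy to check in isolation.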
arXiv Detail & Related papers (2024-09-05T08:40:54Z) - ROPO: Robust Preference Optimization for Large Language Models [59.10763211091664]
We propose an iterative alignment approach that integrates noise-tolerance and filtering of noisy samples without the aid of external models.
Experiments on three widely-used datasets with Mistral-7B and Llama-2-7B demonstrate that ROPO significantly outperforms existing preference alignment methods.
arXiv Detail & Related papers (2024-04-05T13:58:51Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Evaluating the Impact of Local Differential Privacy on Utility Loss via
Influence Functions [11.504012974208466]
We demonstrate the ability of influence functions to offer insight into how a specific privacy parameter value will affect a model's test loss.
Our proposed method allows a data curator to select the privacy parameter best aligned with their allowed privacy-utility trade-off.
arXiv Detail & Related papers (2023-09-15T18:08:24Z) - MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z) - Optimizing the Noise in Self-Supervised Learning: from Importance
Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - Generative Models with Information-Theoretic Protection Against
Membership Inference Attacks [6.840474688871695]
Deep generative models, such as Generative Adversarial Networks (GANs), synthesize diverse high-fidelity data samples.
GANs may disclose private information from the data they are trained on, making them susceptible to adversarial attacks.
We propose an information theoretically motivated regularization term that prevents the generative model from overfitting to training data and encourages generalizability.
arXiv Detail & Related papers (2022-05-31T19:29:55Z) - A Differentially Private Framework for Deep Learning with Convexified
Loss Functions [4.059849656394191]
Differential privacy (DP) has been applied in deep learning for preserving privacy of the underlying training sets.
Existing DP practice falls into three categories - objective perturbation, gradient perturbation and output perturbation.
We propose a novel output perturbation framework by injecting DP noise into a randomly sampled neuron.
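A minimal sketch of that output-perturbation idea follows; the function name and the row-per-neuron weight layout are assumptions, and the paper's sensitivity analysis for the convexified loss is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def perturb_random_neuron(weights, sigma=0.1):
    """Add Gaussian noise to the incoming weights of one randomly sampled
    neuron (one row of the weight matrix), leaving the rest untouched."""
    noisy = weights.copy()
    idx = rng.integers(noisy.shape[0])                       # pick one neuron
    noisy[idx] += rng.normal(0.0, sigma, size=noisy.shape[1])
    return noisy
```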
arXiv Detail & Related papers (2022-04-03T11:10:05Z) - Differentially Private Generative Adversarial Networks with Model
Inversion [6.651002556438805]
To protect sensitive data in training a Generative Adversarial Network (GAN), the standard approach is to use differentially private (DP) gradient descent method.
We propose Differentially Private Model Inversion (DPMI) method where the private data is first mapped to the latent space via a public generator.
Our approach outperforms the standard DP-GAN method based on Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee.
arXiv Detail & Related papers (2022-01-10T02:26:26Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.