Membership Inference Attacks against Diffusion Models
- URL: http://arxiv.org/abs/2302.03262v2
- Date: Wed, 22 Mar 2023 14:31:42 GMT
- Title: Membership Inference Attacks against Diffusion Models
- Authors: Tomoya Matsumoto and Takayuki Miura and Naoto Yanai
- Abstract summary: Diffusion models have attracted attention in recent years as innovative generative models.
We investigate whether a diffusion model is resistant to a membership inference attack.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have attracted attention in recent years as innovative
generative models. In this paper, we investigate whether a diffusion model is
resistant to a membership inference attack, which evaluates the privacy leakage
of a machine learning model. We primarily discuss the diffusion model from two
standpoints: a comparison with a generative adversarial network (GAN) as a
conventional model, and the hyperparameters unique to the diffusion model, i.e.,
time steps, sampling steps, and sampling variances. We conduct extensive
experiments with DDIM as the diffusion model and DCGAN as the GAN on the CelebA
and CIFAR-10 datasets, in both white-box and black-box settings, and confirm
that the diffusion model is about as resistant to a membership inference attack
as the GAN. Next, we demonstrate that the impact of time steps is significant:
the intermediate steps in a noise schedule are the most vulnerable to the
attack. Further analysis yields two key insights. First, DDIM is vulnerable to
the attack at small sample sizes, rather than because it achieves a lower FID.
Second, among the hyperparameters, the number of sampling steps is important for
resistance to the attack, whereas the impact of sampling variances is quite
limited.
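To make the attack model concrete, below is a minimal sketch (not the paper's implementation) of the loss-threshold style of membership inference used against diffusion models: compute the per-example denoising loss of an eps-prediction network at a chosen timestep and flag low-loss examples as likely training members. The `model(x_t, t)` signature, the `alphas_cumprod` schedule tensor, and the calibrated `threshold` are illustrative assumptions.
```python
import torch

@torch.no_grad()
def denoising_loss(model, x0, t, alphas_cumprod):
    """Per-example noise-prediction loss at timestep t (DDPM parameterization).

    model:          assumed eps-prediction network, called as model(x_t, t)
    x0:             candidate images, shape (B, C, H, W)
    t:              long tensor of timesteps, shape (B,)
    alphas_cumprod: 1-D tensor of cumulative alpha products (alpha_bar)
    """
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    # Forward process q(x_t | x_0): x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    eps_pred = model(x_t, t)
    return ((eps_pred - noise) ** 2).flatten(1).mean(dim=1)

def loss_threshold_mia(model, x0, t, alphas_cumprod, threshold):
    """Predict 'member' where the loss falls below a threshold calibrated on shadow data."""
    return denoising_loss(model, x0, t, alphas_cumprod) < threshold
```
Sweeping `t` across the noise schedule and measuring attack accuracy at each step mirrors the timestep analysis in the abstract, where intermediate steps are reported to be the most vulnerable.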
Related papers
- Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling [47.82616476928464]
Masked diffusion models (MDMs) have emerged as a popular research topic for generative modeling of discrete data.
We show that both training and sampling of MDMs are theoretically free from the time variable.
We identify, for the first time, an underlying numerical issue in the categorical sampling of MDMs, even with the commonly used 32-bit floating-point precision.
arXiv Detail & Related papers (2024-09-04T17:48:19Z)
- Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- White-box Membership Inference Attacks against Diffusion Models [13.425726946466423]
Diffusion models have begun to overshadow GANs in industrial applications due to their superior image generation performance.
We aim to design membership inference attacks (MIAs) catered to diffusion models.
We first conduct an exhaustive analysis of existing MIAs on diffusion models, taking into account factors such as black-box/white-box models and the selection of attack features.
We find that white-box attacks are highly applicable in real-world scenarios and that the most effective attacks at present are white-box.
arXiv Detail & Related papers (2023-08-11T22:03:36Z)
- Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z)
- PriSampler: Mitigating Property Inference of Diffusion Models [6.5990719141691825]
This work systematically presents the first privacy study about property inference attacks against diffusion models.
We propose a new model-agnostic plug-in method, PriSampler, to mitigate the risks of property inference against diffusion models.
arXiv Detail & Related papers (2023-06-08T14:05:06Z)
- An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization [58.88327181933151]
In this paper, we propose an efficient query-based membership inference attack (MIA) based on proximal initialization.
Experimental results indicate that the proposed method can achieve competitive performance with only two queries on both discrete-time and continuous-time diffusion models.
To the best of our knowledge, this work is the first to study the robustness of diffusion models to MIA in the text-to-speech task.
arXiv Detail & Related papers (2023-05-26T16:38:48Z)
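The summary does not spell out the two-query mechanism, so the following is a hedged sketch of the proximal-initialization idea rather than the authors' code: query the model once at t = 0 to obtain a noise estimate, reuse that estimate in place of fresh Gaussian noise when constructing x_t, then query again at t and score by the discrepancy between the two predictions. The `model(x, t)` signature and the `alphas_cumprod` schedule tensor are illustrative assumptions.
```python
import torch

@torch.no_grad()
def pia_score(model, x0, t, alphas_cumprod):
    """Two-query membership score: a smaller value suggests a training member."""
    t0 = torch.zeros(x0.shape[0], dtype=torch.long, device=x0.device)
    eps0 = model(x0, t0)                              # query 1: noise estimate at t = 0
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    # Deterministic "forward" step that reuses eps0 instead of sampled noise.
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps0
    eps_t = model(x_t, t)                             # query 2: prediction at t
    return (eps_t - eps0).flatten(1).norm(dim=1)
```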
- Are Diffusion Models Vulnerable to Membership Inference Attacks? [26.35177414594631]
Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose.
We investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern.
We propose Step-wise Error Comparing Membership Inference (SecMI), a query-based MIA that infers memberships by assessing the matching of forward process posterior estimation at each timestep.
arXiv Detail & Related papers (2023-02-02T18:43:16Z)
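A minimal sketch of the step-wise error idea, assuming DDIM's deterministic forward and reverse mappings and an eps-prediction `model(x, t)` (both assumptions, not the authors' code): deterministically push a candidate x_0 up to x_t, take one deterministic denoising step back and one diffusion step forward, and score by how closely the result matches x_t; members tend to reconstruct more faithfully.
```python
import torch

@torch.no_grad()
def ddim_step(model, x, t_from, t_to, alphas_cumprod):
    """Deterministic DDIM mapping between integer timesteps (either direction)."""
    tt = torch.full((x.shape[0],), t_from, dtype=torch.long, device=x.device)
    eps = model(x, tt)
    a_from, a_to = alphas_cumprod[t_from], alphas_cumprod[t_to]
    x0_pred = (x - (1.0 - a_from).sqrt() * eps) / a_from.sqrt()
    return a_to.sqrt() * x0_pred + (1.0 - a_to).sqrt() * eps

@torch.no_grad()
def secmi_t_error(model, x0, t, alphas_cumprod, stride=10):
    """Step-wise error at target timestep t (an int, assumed >= stride)."""
    x = x0
    for s in range(0, t, stride):                     # deterministic x_0 -> x_t
        x = ddim_step(model, x, s, min(s + stride, t), alphas_cumprod)
    x_back = ddim_step(model, x, t, t - stride, alphas_cumprod)      # one step back
    x_rec = ddim_step(model, x_back, t - stride, t, alphas_cumprod)  # re-diffuse
    return (x_rec - x).flatten(1).norm(dim=1)         # small => likely member
```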
- Membership Inference of Diffusion Models [9.355840335132124]
This paper systematically presents the first study about membership inference attacks against diffusion models.
Two attack methods are proposed, namely loss-based and likelihood-based attacks.
Our attack methods are evaluated on several state-of-the-art diffusion models, over different datasets of privacy-sensitive data.
arXiv Detail & Related papers (2023-01-24T12:34:27Z)
- Diffusion Models in Vision: A Survey [80.82832715884597]
A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage.
Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens.
arXiv Detail & Related papers (2022-09-10T22:00:30Z)
- How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z)