Are Diffusion Models Vulnerable to Membership Inference Attacks?
- URL: http://arxiv.org/abs/2302.01316v2
- Date: Tue, 30 May 2023 02:42:23 GMT
- Title: Are Diffusion Models Vulnerable to Membership Inference Attacks?
- Authors: Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu
- Abstract summary: Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose.
We investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern.
We propose Step-wise Error Comparing Membership Inference (SecMI), a query-based MIA that infers memberships by assessing the matching of forward process posterior estimation at each timestep.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion-based generative models have shown great potential for image
synthesis, but there is a lack of research on the security and privacy risks
they may pose. In this paper, we investigate the vulnerability of diffusion
models to Membership Inference Attacks (MIAs), a common privacy concern. Our
results indicate that existing MIAs designed for GANs or VAEs are largely
ineffective on diffusion models, either due to inapplicable scenarios (e.g.,
requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer
distances between synthetic samples and member samples). To address this gap,
we propose Step-wise Error Comparing Membership Inference (SecMI), a
query-based MIA that infers memberships by assessing the matching of forward
process posterior estimation at each timestep. SecMI follows the common
overfitting assumption in MIA where member samples normally have smaller
estimation errors than hold-out samples. We consider both standard diffusion
models (e.g., DDPM) and text-to-image diffusion models (e.g., Latent Diffusion
Models and Stable Diffusion). Experimental results demonstrate that our
methods precisely infer membership with high confidence in both scenarios
across multiple datasets.
Code is available at https://github.com/jinhaoduan/SecMI.
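The step-wise comparison above is easy to prototype. Below is a minimal sketch of the idea, not the authors' released implementation (see the repository for that): it assumes a hypothetical pretrained noise predictor `eps_model(x, t)` and a cumulative noise schedule `alpha_bar`, walks a sample up the forward process with deterministic DDIM steps, and accumulates the round-trip estimation error at each checkpoint timestep.

```python
import torch

@torch.no_grad()
def ddim_step(x, t_from, t_to, eps_model, alpha_bar):
    """One deterministic (eta = 0) DDIM step between arbitrary timesteps."""
    a_from, a_to = alpha_bar[t_from], alpha_bar[t_to]
    t = torch.full((x.shape[0],), t_from, device=x.device, dtype=torch.long)
    eps = eps_model(x, t)                                   # predicted noise
    x0_pred = (x - (1 - a_from).sqrt() * eps) / a_from.sqrt()
    return a_to.sqrt() * x0_pred + (1 - a_to).sqrt() * eps

@torch.no_grad()
def stepwise_error_score(x0, eps_model, alpha_bar, checkpoints, delta=10):
    """Sum per-timestep round-trip errors; members tend to score lower."""
    score = torch.zeros(x0.shape[0], device=x0.device)
    x_t, prev = x0, 0
    for t in checkpoints:  # increasing timesteps; t + delta must stay in range
        # Deterministically push the sample forward to timestep t ...
        x_t = ddim_step(x_t, prev, t, eps_model, alpha_bar)
        # ... then take a short forward/backward round trip and measure how
        # well the model reproduces x_t (the per-step estimation error).
        x_up = ddim_step(x_t, t, t + delta, eps_model, alpha_bar)
        x_back = ddim_step(x_up, t + delta, t, eps_model, alpha_bar)
        score += ((x_back - x_t) ** 2).flatten(1).sum(dim=1)
        prev = t
    return score  # below a calibrated threshold => predicted member
```

A membership decision then comes from thresholding this score on a small calibration split, consistent with the overfitting assumption stated in the abstract.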
Related papers
- Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling
Masked diffusion models (MDMs) have emerged as a popular research topic for generative modeling of discrete data.
We show that both training and sampling of MDMs are theoretically free from the time variable.
We identify, for the first time, an underlying numerical issue, even with the commonly used 32-bit floating-point precision.
arXiv Detail & Related papers (2024-09-04T17:48:19Z)
- Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data
We show that the concrete score in absorbing diffusion can be expressed as conditional probabilities of clean data.
We propose a dedicated diffusion model without time conditioning that characterizes the time-independent conditional probabilities.
Our models achieve SOTA performance among diffusion models on 5 zero-shot language modeling benchmarks.
arXiv Detail & Related papers (2024-06-06T04:22:11Z)
- Intention-aware Denoising Diffusion Model for Trajectory Prediction
Trajectory prediction is an essential component in autonomous driving, particularly for collision avoidance systems.
We propose utilizing the diffusion model to generate the distribution of future trajectories.
We propose an Intention-aware Denoising Diffusion Model (IDM).
Our methods achieve state-of-the-art results, with an FDE of 13.83 pixels on the SDD dataset and 0.36 meters on the ETH/UCY dataset.
arXiv Detail & Related papers (2024-03-14T09:05:25Z)
- Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality (see the sketch after this list).
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z)
- Semi-Implicit Denoising Diffusion Models (SIDDMs)
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently large number of iterative sampling steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z)
- PriSampler: Mitigating Property Inference of Diffusion Models
This work presents the first systematic privacy study of property inference attacks against diffusion models.
We propose PriSampler, a new model-agnostic plug-in method that mitigates the property inference risks of diffusion models.
arXiv Detail & Related papers (2023-06-08T14:05:06Z)
- An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization
In this paper, we propose an efficient query-based membership inference attack (MIA).
Experimental results indicate that the proposed method can achieve competitive performance with only two queries on both discrete-time and continuous-time diffusion models.
To the best of our knowledge, this work is the first to study the robustness of diffusion models to MIA in the text-to-speech task.
arXiv Detail & Related papers (2023-05-26T16:38:48Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that infers membership by targeting local overfitting of the generative model (see the density-ratio sketch after this list).
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- Membership Inference Attacks against Diffusion Models
Diffusion models have attracted attention in recent years as innovative generative models.
We investigate whether a diffusion model is resistant to a membership inference attack.
arXiv Detail & Related papers (2023-02-07T05:20:20Z)
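Two of the entries above describe their mechanisms concretely enough to sketch. For Projection Regret, here is one hedged reading of the summary, with `project` and `perceptual_dist` as placeholders (e.g., a partial noising-then-denoising round trip through the diffusion model, and a perceptual metric such as LPIPS); this illustrates the recursive-projection idea, not the paper's code.

```python
def projection_regret(x, project, perceptual_dist):
    """Sketch of a projection-based novelty score; larger => more novel.

    Assumes `project` maps an image to its diffusion-based projection and
    `perceptual_dist` is a perceptual distance (both hypothetical here).
    """
    x1 = project(x)    # projection of the test image
    x2 = project(x1)   # projection of the projection, a background baseline
    # Subtracting the second distance is meant to cancel non-semantic
    # (background) bias that inflates the raw projection distance.
    return perceptual_dist(x, x1) - perceptual_dist(x1, x2)
```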
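For DOMIAS, the density-ratio idea can be sketched directly: score a candidate by how much more likely it is under a density fitted to the model's synthetic output than under a density fitted to reference data from the underlying distribution. The KDE choice and all names below are illustrative assumptions, not the paper's implementation.

```python
from sklearn.neighbors import KernelDensity

def domias_style_score(x, synthetic, reference, bandwidth=0.5):
    """log p_G(x) - log p_ref(x); larger values suggest membership.

    `x`, `synthetic`, and `reference` are 2-D arrays of feature vectors.
    """
    p_g = KernelDensity(bandwidth=bandwidth).fit(synthetic)    # model density
    p_ref = KernelDensity(bandwidth=bandwidth).fit(reference)  # reference density
    return p_g.score_samples(x) - p_ref.score_samples(x)       # log-ratio
```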