Noise as a Probe: Membership Inference Attacks on Diffusion Models Leveraging Initial Noise
- URL: http://arxiv.org/abs/2601.21628v1
- Date: Thu, 29 Jan 2026 12:29:01 GMT
- Title: Noise as a Probe: Membership Inference Attacks on Diffusion Models Leveraging Initial Noise
- Authors: Puwei Lian, Yujun Cai, Songze Li, Bingkun Bao
- Abstract summary: Diffusion models have achieved remarkable progress in image generation, but their increasing deployment raises serious concerns about privacy. In this work, we exploit a critical yet overlooked vulnerability: widely used noise schedules fail to fully eliminate semantic information from images. We propose a simple yet effective membership inference attack that injects semantic information into the initial noise and infers membership by analyzing the model's generation results.
- Score: 51.179816451161635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have achieved remarkable progress in image generation, but their increasing deployment raises serious concerns about privacy. Fine-tuned models are particularly vulnerable, as they are often fine-tuned on small, private datasets. Membership inference attacks (MIAs) assess privacy risk by determining whether a specific sample was part of a model's training data. Existing MIAs against diffusion models either assume access to intermediate results or require auxiliary datasets to train shadow models. In this work, we exploit a critical yet overlooked vulnerability: the widely used noise schedules fail to fully eliminate semantic information from images, leaving residual semantic signals even at the maximum noise step. We empirically demonstrate that a fine-tuned diffusion model captures hidden correlations between the residual semantics in the initial noise and the original images. Building on this insight, we propose a simple yet effective membership inference attack that injects semantic information into the initial noise and infers membership by analyzing the model's generation results. Extensive experiments demonstrate that semantic initial noise strongly reveals membership information, highlighting the vulnerability of diffusion models to MIAs.
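The vulnerability the abstract describes, that common noise schedules leave a nonzero signal coefficient even at the maximum timestep, can be checked directly from the DDPM forward process x_T = sqrt(alpha_bar_T) * x_0 + sqrt(1 - alpha_bar_T) * eps. Below is a minimal numerical sketch assuming the standard linear beta schedule from DDPM; it is an illustration of the residual-signal phenomenon, not the authors' attack code:

```python
import numpy as np

# Standard DDPM linear beta schedule (an assumption; the paper's exact
# schedules may differ).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

# Signal coefficient at the maximum noise step: sqrt(alpha_bar_T).
# If this is nonzero, x_T still carries a (faint) scaled copy of x_0.
signal_coef = np.sqrt(alpha_bar[-1])
print(f"sqrt(alpha_bar_T) = {signal_coef:.2e}")

# Empirical check: correlate fully noised samples with the clean data.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(1_000_000)   # stand-in for flattened image pixels
eps = rng.standard_normal(1_000_000)  # the injected Gaussian noise
xT = signal_coef * x0 + np.sqrt(1.0 - alpha_bar[-1]) * eps
corr = np.corrcoef(x0, xT)[0, 1]
print(f"corr(x0, xT) = {corr:.4f}")   # small but strictly positive
```

Because alpha_bar_T never reaches exactly zero under this schedule, the "fully noised" latent is not statistically independent of the clean image; this residual correlation is precisely the hook that a semantics-in-noise attack can exploit.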
Related papers
- Noise Aggregation Analysis Driven by Small-Noise Injection: Efficient Membership Inference for Diffusion Models [19.763802072516228]
A key concern is membership inference attacks, which attempt to determine whether a particular data sample was used in model training. We propose an efficient membership inference attack method against diffusion models. Our method also achieves stronger attack performance, in both ASR and AUC, against large-scale text-to-image diffusion models.
arXiv Detail & Related papers (2025-10-18T16:28:48Z) - On the MIA Vulnerability Gap Between Private GANs and Diffusion Models [51.53790101362898]
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. We present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models.
arXiv Detail & Related papers (2025-09-03T14:18:22Z) - Unveiling Impact of Frequency Components on Membership Inference Attacks for Diffusion Models [51.179816451161635]
Membership Inference Attacks (MIAs) are designed to ascertain whether specific data were utilized during a model's training phase. We formalize existing attacks into a unified paradigm that computes a membership score for membership identification. Under this paradigm, we empirically find that existing attacks overlook an inherent deficiency in how diffusion models process high-frequency information. We propose a plug-and-play high-frequency filter module to mitigate the adverse effects of this deficiency.
arXiv Detail & Related papers (2025-05-27T09:50:11Z) - DynaNoise: Dynamic Probabilistic Noise Injection for Defending Against Membership Inference Attacks [6.610581923321801]
Membership Inference Attacks (MIAs) pose a significant risk to the privacy of training datasets. Traditional mitigation techniques rely on injecting a fixed amount of noise during training or inference. We present DynaNoise, an adaptive approach that dynamically modulates noise injection based on query sensitivity.
arXiv Detail & Related papers (2025-05-19T17:07:00Z) - MIGA: Mutual Information-Guided Attack on Denoising Models for Semantic Manipulation [39.12448251986432]
We propose Mutual Information-Guided Attack (MIGA) to directly attack deep denoising models. MIGA strategically disrupts denoising models' ability to preserve semantic content via adversarial perturbations. Our findings suggest that denoising models are not always robust and can introduce security risks in real-world applications.
arXiv Detail & Related papers (2025-03-10T06:26:34Z) - Impact of Noisy Supervision in Foundation Model Learning [91.56591923244943]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets. We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise [31.586389548657205]
Unlearnable examples are proposed to significantly degrade the generalization performance of models by adding a kind of imperceptible noise to the data.
We introduce stable error-minimizing noise (SEM), which trains the defensive noise against random perturbation instead of the time-consuming adversarial perturbation.
SEM achieves a new state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet Subset.
arXiv Detail & Related papers (2023-11-22T01:43:57Z) - ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z) - An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization [58.88327181933151]
In this paper, we propose an efficient query-based membership inference attack (MIA).
Experimental results indicate that the proposed method can achieve competitive performance with only two queries on both discrete-time and continuous-time diffusion models.
To the best of our knowledge, this work is the first to study the robustness of diffusion models to MIA in the text-to-speech task.
arXiv Detail & Related papers (2023-05-26T16:38:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.