PriSampler: Mitigating Property Inference of Diffusion Models
- URL: http://arxiv.org/abs/2306.05208v2
- Date: Mon, 29 Apr 2024 08:47:02 GMT
- Title: PriSampler: Mitigating Property Inference of Diffusion Models
- Authors: Hailong Hu, Jun Pang
- Abstract summary: This work systematically presents the first privacy study about property inference attacks against diffusion models.
We propose PriSampler, a new model-agnostic plug-in method, to mitigate the risks of property inference against diffusion models.
- Score: 6.5990719141691825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have been remarkably successful in data synthesis. However, when these models are applied to sensitive datasets, such as banking and human face data, they might raise severe privacy concerns. This work systematically presents the first privacy study of property inference attacks against diffusion models, where adversaries aim to extract sensitive global properties of a model's training set from the model itself. Specifically, we focus on the most practical attack scenario: adversaries are restricted to accessing only synthetic data. Under this realistic scenario, we conduct a comprehensive evaluation of property inference attacks on various diffusion models trained on diverse data types, including tabular and image datasets. A broad range of evaluations reveals that diffusion models and their samplers are universally vulnerable to property inference attacks. In response, we propose PriSampler, a new model-agnostic plug-in method that mitigates the risks of property inference for diffusion models. PriSampler can be applied directly to well-trained diffusion models and supports both stochastic and deterministic sampling. Extensive experiments illustrate the effectiveness of our defense: it leads adversaries to infer property proportions close to the predefined values that model owners specify. Notably, PriSampler also significantly outperforms diffusion models trained with differential privacy in both model utility and defense performance. This work will raise awareness of property inference attacks and encourage privacy-preserving synthetic data release.
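The abstract does not spell out PriSampler's mechanism, but its stated goal, steering the proportion of a sensitive property in released synthetic data toward a predefined value, can be pictured with a generic rejection-sampling wrapper around a trained sampler. This is a hypothetical sketch, not the paper's algorithm; `generate_batch` and `has_property` are assumed stand-ins for a diffusion sampler and a property classifier.
```python
import random

def controlled_sample(generate_batch, has_property, n_total, target_ratio):
    """Release n_total samples whose property proportion matches target_ratio."""
    want_pos = round(n_total * target_ratio)   # samples WITH the property
    want_neg = n_total - want_pos              # samples WITHOUT it
    released = []
    while want_pos > 0 or want_neg > 0:
        for x in generate_batch():             # draw a fresh synthetic batch
            if has_property(x) and want_pos > 0:
                released.append(x)
                want_pos -= 1
            elif not has_property(x) and want_neg > 0:
                released.append(x)
                want_neg -= 1
    random.shuffle(released)                   # hide the acceptance order
    return released
```
An adversary observing only the released set then sees the owner-chosen proportion rather than the training set's true proportion.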
Related papers
- Constrained Diffusion Models via Dual Training [80.03953599062365]
Diffusion processes are prone to generating samples that reflect biases in a training dataset.
We develop constrained diffusion models by imposing diffusion constraints based on desired distributions.
We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off between the training objective and the imposed constraints.
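As a rough illustration of dual training, a Lagrangian relaxation alternates gradient steps on the penalized objective with dual-ascent steps on the multiplier. This is a minimal sketch under assumed placeholders (`diffusion_loss`, `constraint_gap`), not the paper's exact formulation.
```python
import torch

def dual_train(model, loader, diffusion_loss, constraint_gap,
               epochs=10, lr=1e-4, dual_lr=1e-2):
    """Primal-dual training: minimize loss + lam * gap, dual ascent on lam."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    lam = torch.zeros(())                         # Lagrange multiplier >= 0
    for _ in range(epochs):
        for batch in loader:
            gap = constraint_gap(model, batch)    # > 0 means constraint violated
            loss = diffusion_loss(model, batch) + lam * gap
            opt.zero_grad()
            loss.backward()
            opt.step()
            lam = torch.clamp(lam + dual_lr * gap.detach(), min=0.0)  # dual ascent
    return model, lam
```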
arXiv Detail & Related papers (2024-08-27T14:25:42Z)
- Efficient Differentially Private Fine-Tuning of Diffusion Models [15.71777343534365]
Fine-tuning large diffusion models with DP-SGD can be very resource-demanding in terms of memory usage and computation.
In this work, we investigate Parameter-Efficient Fine-Tuning (PEFT) of diffusion models using Low-Dimensional Adaptation (LoDA) with Differential Privacy.
Our source code will be made available on GitHub.
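One way to picture DP fine-tuning of a low-rank adapter: freeze the base weights, train only the adapter parameters, and apply DP-SGD (per-example gradient clipping plus Gaussian noise) to them. The sketch below hand-rolls DP-SGD over a toy linear adapter for clarity; it is an assumption-laden illustration, not the paper's LoDA implementation.
```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Toy low-rank residual adapter over a frozen linear layer."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base.requires_grad_(False)             # freeze base weights
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        return self.base(x) + x @ self.A @ self.B          # low-rank residual

def dp_sgd_step(params, examples, loss_fn, lr=1e-3, clip=1.0, sigma=1.0):
    """One DP-SGD step: clip each per-example gradient, add Gaussian noise."""
    summed = [torch.zeros_like(p) for p in params]
    for x, y in examples:                                  # per-example gradients
        grads = torch.autograd.grad(loss_fn(x, y), params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale                                 # clipped gradient
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * sigma * clip     # Gaussian mechanism
            p -= lr * (s + noise) / len(examples)
```
Training only `A` and `B` keeps the number of noisy parameters small, which is the usual motivation for combining PEFT with DP-SGD.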
arXiv Detail & Related papers (2024-06-07T21:00:20Z)
- Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data [74.2507346810066]
Ambient diffusion is a recently proposed framework for training diffusion models using corrupted data.
We present the first framework for training diffusion models that provably sample from the uncorrupted distribution given only noisy training data.
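The "Tweedie" in the title refers to Tweedie's formula, which recovers the posterior mean of the clean signal from a noisy observation and the score: for x_t = x_0 + sigma * eps, E[x_0 | x_t] = x_t + sigma^2 * grad log p(x_t). A one-line sketch, assuming a score model `score_fn(x_t, sigma)` approximating the score:
```python
def tweedie_denoise(x_t, sigma, score_fn):
    """Posterior mean of x_0 given x_t = x_0 + sigma * eps (Tweedie's formula)."""
    return x_t + sigma ** 2 * score_fn(x_t, sigma)
```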
arXiv Detail & Related papers (2024-03-20T14:22:12Z)
- Exploring Privacy and Fairness Risks in Sharing Diffusion Models: An Adversarial Perspective [31.010937126289953]
We take an adversarial perspective to investigate the potential privacy and fairness risks associated with the sharing of diffusion models.
We demonstrate that the sharer can execute fairness poisoning attacks to undermine the receiver's downstream models.
Our experiments conducted on real-world datasets demonstrate remarkable attack performance on different types of diffusion models.
arXiv Detail & Related papers (2024-02-28T12:21:12Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
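The density-ratio idea behind DOMIAS can be sketched as scoring a query point by comparing the generative model's density against a reference (population) density and flagging points where the generator is locally overfit. KDE is used below as a simple stand-in density estimator; the paper's estimators and calibration may differ.
```python
import numpy as np
from sklearn.neighbors import KernelDensity

def density_ratio_scores(synthetic, reference, queries, bandwidth=0.5):
    """log p_G(x) - log p_R(x): high values suggest local overfitting."""
    kde_g = KernelDensity(bandwidth=bandwidth).fit(synthetic)   # generator density
    kde_r = KernelDensity(bandwidth=bandwidth).fit(reference)   # population density
    return kde_g.score_samples(queries) - kde_r.score_samples(queries)

# Example: flag queries scoring above a calibrated threshold as members
# members = density_ratio_scores(syn_data, ref_data, query_points) > 0.0
```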
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- Data Forensics in Diffusion Models: A Systematic Analysis of Membership Privacy [62.16582309504159]
We develop a systematic analysis of membership inference attacks on diffusion models and propose novel attack methods tailored to each attack scenario.
Our approach exploits easily obtainable quantities and is highly effective, achieving near-perfect attack performance (>0.9 AUCROC) in realistic scenarios.
arXiv Detail & Related papers (2023-02-15T17:37:49Z)
- Membership Inference Attacks against Diffusion Models [0.0]
Diffusion models have attracted attention in recent years as innovative generative models.
We investigate whether a diffusion model is resistant to a membership inference attack.
arXiv Detail & Related papers (2023-02-07T05:20:20Z)
- Are Diffusion Models Vulnerable to Membership Inference Attacks? [26.35177414594631]
Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose.
We investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern.
We propose Step-wise Error Comparing Membership Inference (SecMI), a query-based MIA that infers memberships by assessing the matching of forward process posterior estimation at each timestep.
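A simplified version of the step-wise error idea: push a sample one deterministic DDIM step forward, pull it back, and measure the reconstruction error, which tends to be lower for training members. This sketch assumes an epsilon-prediction model `eps_model(x, t)` and a cumulative noise schedule `abar`; it is not the authors' exact SecMI implementation.
```python
import torch

def ddim_step(x, t_from, t_to, eps_model, abar):
    """Deterministic DDIM transition between timesteps t_from and t_to."""
    eps = eps_model(x, t_from)
    x0_hat = (x - (1 - abar[t_from]).sqrt() * eps) / abar[t_from].sqrt()
    return abar[t_to].sqrt() * x0_hat + (1 - abar[t_to]).sqrt() * eps

def stepwise_error(x_t, t, eps_model, abar, delta=1):
    """Round-trip error at timestep t; members tend to score lower."""
    x_fwd = ddim_step(x_t, t, t + delta, eps_model, abar)     # deterministic forward
    x_back = ddim_step(x_fwd, t + delta, t, eps_model, abar)  # deterministic reverse
    return (x_back - x_t).pow(2).flatten(1).sum(dim=1)        # per-sample error
```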
arXiv Detail & Related papers (2023-02-02T18:43:16Z)
- Extracting Training Data from Diffusion Models [77.11719063152027]
We show that diffusion models memorize individual images from their training data and emit them at generation time.
With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models.
We train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy.
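A generate-and-filter pipeline can be sketched as: generate many samples, then flag those with unusually many near-duplicate neighbors among the generations, since memorized training images tend to be re-generated repeatedly. `sample` and `embed` below are assumed stand-ins for the diffusion sampler and a feature extractor; the paper's filtering criteria may differ.
```python
import numpy as np

def extraction_candidates(sample, embed, n_generate=1000,
                          dist_thresh=0.1, min_neighbors=10):
    """Flag generations with many near-duplicates as likely memorized images."""
    images = [sample() for _ in range(n_generate)]
    feats = np.stack([embed(img) for img in images])
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)     # unit-normalize
    dists = 1.0 - feats @ feats.T                             # cosine distance
    neighbor_counts = (dists < dist_thresh).sum(axis=1) - 1   # exclude self
    return [img for img, c in zip(images, neighbor_counts) if c >= min_neighbors]
```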
arXiv Detail & Related papers (2023-01-30T18:53:09Z)
- Membership Inference of Diffusion Models [9.355840335132124]
This paper systematically presents the first study about membership inference attacks against diffusion models.
Two attack methods are proposed, namely loss-based and likelihood-based attacks.
Our attack methods are evaluated on several state-of-the-art diffusion models across different privacy-sensitive datasets.
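A loss-based attack can be sketched as thresholding the per-sample denoising loss averaged over timesteps, on the intuition that training members incur lower loss. This is a generic sketch assuming an epsilon-prediction model and schedule `abar`, not necessarily the authors' exact attack; the threshold would be calibrated separately, e.g. on shadow data.
```python
import torch

@torch.no_grad()
def denoising_loss_score(x0, eps_model, abar, n_steps=10):
    """Average denoising loss over timesteps; low scores suggest members."""
    losses = []
    for t in torch.linspace(0, len(abar) - 1, n_steps).long():
        eps = torch.randn_like(x0)
        x_t = abar[t].sqrt() * x0 + (1 - abar[t]).sqrt() * eps   # forward diffuse
        pred = eps_model(x_t, t)
        losses.append((pred - eps).pow(2).flatten(1).mean(dim=1))
    return torch.stack(losses).mean(dim=0)
```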
arXiv Detail & Related papers (2023-01-24T12:34:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.