Model Will Tell: Training Membership Inference for Diffusion Models
- URL: http://arxiv.org/abs/2403.08487v1
- Date: Wed, 13 Mar 2024 12:52:37 GMT
- Title: Model Will Tell: Training Membership Inference for Diffusion Models
- Authors: Xiaomeng Fu, Xi Wang, Qiao Li, Jin Liu, Jiao Dai and Jizhong Han
- Abstract summary: Training Membership Inference (TMI) task aims to determine whether a specific sample has been used in the training process of a target model.
In this paper, we explore a novel perspective for the TMI task by leveraging the intrinsic generative priors within the diffusion model.
- Score: 15.16244745642374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models pose risks of privacy breaches and copyright disputes,
primarily stemming from the potential utilization of unauthorized data during
the training phase. The Training Membership Inference (TMI) task aims to
determine whether a specific sample has been used in the training process of a
target model, representing a critical tool for privacy violation verification.
However, the increased stochasticity inherent in the diffusion process renders traditional
shadow-model-based or metric-based methods ineffective when applied to
diffusion models. Moreover, existing methods only yield binary classification
labels which lack necessary comprehensibility in practical applications. In
this paper, we explore a novel perspective for the TMI task by leveraging the
intrinsic generative priors within the diffusion model. Compared with unseen
samples, training samples exhibit stronger generative priors within the
diffusion model, enabling the successful reconstruction of substantially
degraded training images. Consequently, we propose the Degrade Restore Compare
(DRC) framework. In this framework, an image undergoes sequential degradation
and restoration, and its membership is determined by comparing it with the
restored counterpart. Experimental results verify that our approach not only
significantly outperforms existing methods in terms of accuracy but also
provides comprehensible decision criteria, offering evidence for potential
privacy violations.
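The degrade-restore-compare idea above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the degradation here is simple additive Gaussian noise, `restore_fn` is a hypothetical stand-in for the target diffusion model's restoration process, and the MSE comparison and threshold value are placeholder choices.

```python
import numpy as np

def degrade(image: np.ndarray, noise_level: float = 0.8, seed: int = 0) -> np.ndarray:
    """Substantially degrade the image; here, additive Gaussian noise stands in
    for whatever degradation operator the framework applies."""
    rng = np.random.default_rng(seed)
    return image + noise_level * rng.standard_normal(image.shape)

def membership_score(image: np.ndarray, restore_fn, noise_level: float = 0.8) -> float:
    """DRC pipeline: degrade the image, restore it (in the paper, via the target
    diffusion model's generative priors), and compare with the original.
    A lower reconstruction error suggests stronger priors, i.e. a training sample."""
    degraded = degrade(image, noise_level)
    restored = restore_fn(degraded)
    return float(np.mean((restored - image) ** 2))

def infer_membership(image: np.ndarray, restore_fn, threshold: float = 0.05) -> bool:
    """Binary membership decision; the restored image itself serves as the
    comprehensible evidence the abstract mentions."""
    return membership_score(image, restore_fn) < threshold
```

Because the decision reduces to a reconstruction-error comparison, the restored image can be shown alongside the original as human-interpretable evidence, which is the comprehensibility advantage the abstract claims over binary-only methods.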
Related papers
- Model Integrity when Unlearning with T2I Diffusion Models [11.321968363411145]
We propose approximate Machine Unlearning algorithms to reduce the generation of specific types of images, characterized by samples from a "forget distribution".
We then propose unlearning algorithms that demonstrate superior effectiveness in preserving model integrity compared to existing baselines.
arXiv Detail & Related papers (2024-11-04T13:15:28Z)
- Learning Diffusion Model from Noisy Measurement using Principled Expectation-Maximization Method [9.173055778539641]
We propose a principled expectation-maximization (EM) framework that iteratively learns diffusion models from noisy data with arbitrary corruption types.
Our framework employs a plug-and-play Monte Carlo method to accurately estimate clean images from noisy measurements, followed by training the diffusion model using the reconstructed images.
arXiv Detail & Related papers (2024-10-15T03:54:59Z)
- Bridging Generative and Discriminative Models for Unified Visual Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil potential characteristics of Vermouth, such as varying granularity of perception concealed in latent variables at distinct time steps and various U-net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z)
- A Recycling Training Strategy for Medical Image Segmentation with Diffusion Denoising Models [8.649603931882227]
Denoising diffusion models have found applications in image segmentation by generating segmented masks conditioned on images.
In this work, we focus on improving the training strategy and propose a novel recycling method.
We show that, under a fair comparison with the same network architectures and computing budget, the proposed recycling-based diffusion models achieved on-par performance with non-diffusion-based supervised training.
arXiv Detail & Related papers (2023-08-30T23:03:49Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods on image restoration.
We classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR.
We propose five potential and challenging directions for the future research of diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z)
- Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z)
- Exploring Continual Learning of Diffusion Models [24.061072903897664]
We evaluate the continual learning (CL) properties of diffusion models.
We provide insights into the dynamics of forgetting, which exhibit diverse behavior across diffusion timesteps.
arXiv Detail & Related papers (2023-03-27T15:52:14Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.