Exposing the Fake: Effective Diffusion-Generated Images Detection
- URL: http://arxiv.org/abs/2307.06272v1
- Date: Wed, 12 Jul 2023 16:16:37 GMT
- Title: Exposing the Fake: Effective Diffusion-Generated Images Detection
- Authors: Ruipeng Ma, Jinhao Duan, Fei Kong, Xiaoshuang Shi, Kaidi Xu
- Abstract summary: This paper proposes a novel detection method called Stepwise Error for Diffusion-generated Image Detection (SeDID).
SeDID exploits the unique attributes of diffusion models, namely deterministic reverse and deterministic denoising computation errors.
Our work makes a pivotal contribution to distinguishing diffusion model-generated images, marking a significant step in the domain of artificial intelligence security.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image synthesis has seen significant advancements with the advent of
diffusion-based generative models like Denoising Diffusion Probabilistic Models
(DDPM) and text-to-image diffusion models. Despite their efficacy, there is a
dearth of research dedicated to detecting diffusion-generated images, which
could pose potential security and privacy risks. This paper addresses this gap
by proposing a novel detection method called Stepwise Error for
Diffusion-generated Image Detection (SeDID). Comprising statistical-based
$\text{SeDID}_{\text{Stat}}$ and neural network-based
$\text{SeDID}_{\text{NNs}}$, SeDID exploits the unique attributes of diffusion
models, namely deterministic reverse and deterministic denoising computation
errors. Our evaluations demonstrate SeDID's superior performance over existing
methods when applied to diffusion models. Thus, our work makes a pivotal
contribution to distinguishing diffusion model-generated images, marking a
significant step in the domain of artificial intelligence security.
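The stepwise-error idea behind $\text{SeDID}_{\text{Stat}}$ can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the toy stand-in model, and the threshold are all assumptions made for illustration; in practice `reverse_step` and `denoise_step` would be deterministic (DDIM-style) steps of a pre-trained diffusion model evaluated at an intermediate timestep.

```python
import numpy as np

# Hypothetical SeDID_Stat-style statistic: take a deterministic reverse
# step (image -> noisier latent), a deterministic denoising step
# (latent -> image), and measure the round-trip error. Images produced
# by the model itself tend to round-trip with lower error than real
# photographs, so a low error suggests a generated image.

def stepwise_error(x, reverse_step, denoise_step, t):
    """Mean absolute error between x and its one-step round-trip at timestep t."""
    x_t = reverse_step(x, t)      # deterministic reverse (inversion)
    x_rec = denoise_step(x_t, t)  # deterministic denoising
    return float(np.mean(np.abs(x - x_rec)))

def sedid_stat(x, reverse_step, denoise_step, t, threshold):
    """Flag x as 'generated' when the stepwise error falls below threshold."""
    return stepwise_error(x, reverse_step, denoise_step, t) < threshold

# Toy stand-in model: a perfect round-trip (zero error) mimics an image
# lying exactly on the model's manifold, i.e. a generated image.
reverse = lambda x, t: x * 0.5
denoise_perfect = lambda x, t: x * 2.0      # inverts `reverse` exactly
denoise_lossy = lambda x, t: x * 2.0 + 0.1  # imperfect round-trip ("real" image)

img = np.ones((8, 8))
print(sedid_stat(img, reverse, denoise_perfect, t=10, threshold=0.05))  # True
print(sedid_stat(img, reverse, denoise_lossy, t=10, threshold=0.05))    # False
```

The $\text{SeDID}_{\text{NNs}}$ variant would replace the fixed threshold with a neural classifier trained on such error maps.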
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405] (2024-08-11)
  StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
  It is effective in both white-box and black-box settings, transforming AI-generated images into adversarial forgeries that evade forensic detectors.
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474] (2023-12-18)
  We propose a unified framework, Adv-Diffusion, that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
  Specifically, we propose an identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
  The designed adaptive strength-based adversarial perturbation algorithm ensures both attack transferability and stealthiness.
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498] (2023-07-06)
  We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
  Specifically, we modify the protected images by adding unique content to them using stealthy image warping functions.
  By analyzing whether a model has memorized the injected content, we can detect models that illegally utilized the unauthorized data.
- Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality [8.968599131722023] (2023-07-05)
  Diffusion models have been successfully applied to the visual synthesis of strikingly realistic images.
  This raises strong concerns about their potential for malicious purposes.
  We propose using the lightweight multi Local Intrinsic Dimensionality (multiLID) for the automatic detection of synthetic images.
- CamoDiffusion: Camouflaged Object Detection via Conditional Diffusion Models [72.93652777646233] (2023-05-29)
  Camouflaged Object Detection (COD) is a challenging task in computer vision due to the high similarity between camouflaged objects and their surroundings.
  We propose a new paradigm that treats COD as a conditional mask-generation task leveraging diffusion models.
  Our method, dubbed CamoDiffusion, employs the denoising process of diffusion models to iteratively reduce the noise of the mask.
- DIRE for Diffusion-Generated Image Detection [128.95822613047298] (2023-03-16)
  We propose a novel representation called DIffusion Reconstruction Error (DIRE).
  DIRE measures the error between an input image and its reconstruction by a pre-trained diffusion model.
  This suggests that DIRE can serve as a bridge for distinguishing generated images from real ones.
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301] (2022-11-22)
  We present SinDiffusion, which leverages denoising diffusion models to capture the internal distribution of patches in a single natural image.
  It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
  Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
- Fast Unsupervised Brain Anomaly Detection and Segmentation with Diffusion Models [1.6352599467675781] (2022-06-07)
  We propose a method based on diffusion models to detect and segment anomalies in brain imaging.
  Our diffusion models achieve competitive performance compared with autoregressive approaches across a series of experiments with 2D CT and MRI data.
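The DIRE representation summarized above (reconstruction error between an image and its diffusion-model reconstruction) can be sketched in a few lines. The names, the toy `reconstruct` function, and the threshold below are illustrative assumptions, not the paper's implementation; in practice `reconstruct` would be a full invert-then-regenerate pass through a pre-trained diffusion model.

```python
import numpy as np

# Hedged sketch of a DIRE-style detector. A toy quantizer stands in for
# the diffusion-model reconstruction: images already on the model's
# "manifold" (here, the quantization grid) reconstruct almost perfectly,
# so their reconstruction error is low.

def dire(x, reconstruct):
    """DIffusion Reconstruction Error: per-pixel |x - reconstruct(x)|."""
    return np.abs(x - reconstruct(x))

def is_generated(x, reconstruct, threshold):
    """Flag x as 'generated' when its mean DIRE is below threshold."""
    return float(np.mean(dire(x, reconstruct))) < threshold

# Toy reconstruction: snap pixel values to a coarse grid. An on-grid
# image ("generated") has zero error; an off-grid one ("real") does not.
reconstruct = lambda x: np.round(x * 4) / 4

on_grid = np.full((4, 4), 0.25)
off_grid = np.full((4, 4), 0.30)
print(is_generated(on_grid, reconstruct, 0.02))   # True
print(is_generated(off_grid, reconstruct, 0.02))  # False
```

Note how this differs from SeDID's design: DIRE compares the input against a full reconstruction, while SeDID measures errors at intermediate diffusion steps.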
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.