Enhancing Frequency Forgery Clues for Diffusion-Generated Image Detection
- URL: http://arxiv.org/abs/2511.00429v1
- Date: Sat, 01 Nov 2025 06:58:05 GMT
- Title: Enhancing Frequency Forgery Clues for Diffusion-Generated Image Detection
- Authors: Daichi Zhang, Tong Zhang, Shiming Ge, Sabine Süsstrunk
- Abstract summary: Diffusion models have achieved remarkable success in image synthesis, but the generated high-quality images raise concerns about potential malicious use. Existing detectors often struggle to capture discriminative clues across different models and settings. We propose a simple yet effective representation by enhancing the Frequency Forgery Clue (F^2C) across all frequency bands.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have achieved remarkable success in image synthesis, but the generated high-quality images raise concerns about potential malicious use. Existing detectors often struggle to capture discriminative clues across different models and settings, limiting their generalization to unseen diffusion models and robustness to various perturbations. To address this issue, we observe that diffusion-generated images exhibit progressively larger differences from natural real images across low- to high-frequency bands. Based on this insight, we propose a simple yet effective representation by enhancing the Frequency Forgery Clue (F^2C) across all frequency bands. Specifically, we introduce a frequency-selective function which serves as a weighted filter to the Fourier spectrum, suppressing less discriminative bands while enhancing more informative ones. This approach, grounded in a comprehensive analysis of frequency-based differences between natural real and diffusion-generated images, enables general detection of images from unseen diffusion models and provides robust resilience to various perturbations. Extensive experiments on various diffusion-generated image datasets demonstrate that our method outperforms state-of-the-art detectors with superior generalization and robustness.
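The abstract describes a frequency-selective function that acts as a weighted filter on the Fourier spectrum, suppressing less discriminative bands and enhancing more informative ones. A minimal sketch of that idea is shown below; the `band_weights` lookup table and the log-magnitude output are illustrative assumptions on our part, since the paper derives its filter from a statistical analysis of real versus diffusion-generated images that we do not reproduce here.

```python
import numpy as np

def f2c_representation(image, band_weights):
    """Sketch: weight the Fourier spectrum per radial-frequency band.

    `band_weights` is a hypothetical 1-D array mapping normalized radial
    frequency (0 = DC, 1 = highest band) to a multiplicative weight.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    radius /= radius.max()  # normalize radii to [0, 1]
    # Map each frequency bin to a band index and apply its weight.
    idx = np.minimum((radius * (len(band_weights) - 1)).astype(int),
                     len(band_weights) - 1)
    weighted = spectrum * band_weights[idx]
    # Log-magnitude as a simple detector-ready representation (an assumption).
    return np.log1p(np.abs(weighted))

# Example: down-weight low bands, emphasize high bands, matching the
# observation that differences grow from low to high frequencies.
img = np.random.rand(64, 64)
weights = np.linspace(0.2, 1.0, 32)
rep = f2c_representation(img, weights)
```

In practice such a representation would be fed to a standard binary classifier; the weighting profile itself is what the paper grounds in its frequency-band analysis.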
Related papers
- Detecting AI-Generated Images via Diffusion Snap-Back Reconstruction: A Forensic Approach [0.0]
Traditional deepfake detection methods fail against text-to-image systems such as Stable Diffusion and DALL-E. This paper introduces a diffusion-based forensic framework that leverages multi-strength image reconstruction dynamics.
arXiv Detail & Related papers (2025-11-01T01:35:54Z) - Generalizable AI-Generated Image Detection Based on Fractal Self-Similarity in the Spectrum [38.302088844940556]
We propose a novel detection method based on the fractal self-similarity of the spectrum. We show that AI-generated images exhibit fractal-like spectral growth through periodic extension and low-pass filtering. Our method mitigates the impact of varying spectral characteristics across different generators, improving detection performance for images from unseen models.
arXiv Detail & Related papers (2025-03-11T14:37:06Z) - Explainable Synthetic Image Detection through Diffusion Timestep Ensembling [30.298198387824275]
We propose a novel synthetic image detection method that directly utilizes features of intermediately noised images by training an ensemble on multiple noised timesteps. To enhance human comprehension, we introduce a metric-grounded explanation generation and refinement module. Our method achieves state-of-the-art performance with 98.91% and 95.89% detection accuracy on regular and challenging samples respectively.
arXiv Detail & Related papers (2025-03-08T13:04:20Z) - FIRE: Robust Detection of Diffusion-Generated Images via Frequency-Guided Reconstruction Error [16.185085063881772]
Diffusion models struggle to accurately reconstruct mid-band frequency information in real images. FIRE (Frequency-guided Reconstruction Error) is the first to investigate the influence of frequency decomposition on reconstruction error. Experiments show that FIRE generalizes effectively to unseen diffusion models and maintains robustness against diverse perturbations.
arXiv Detail & Related papers (2024-12-10T03:02:34Z) - Ultrasound Image Enhancement with the Variance of Diffusion Models [7.360352432782388]
Enhancing ultrasound images requires a delicate balance between contrast, resolution, and speckle preservation.
This paper introduces a novel approach that integrates adaptive beamforming with denoising diffusion-based variance imaging.
arXiv Detail & Related papers (2024-09-17T17:29:33Z) - StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - Diffusion Facial Forgery Detection [56.69763252655695]
This paper introduces DiFF, a comprehensive dataset dedicated to face-focused diffusion-generated images.
We conduct extensive experiments on the DiFF dataset via a human test and several representative forgery detection methods.
The results demonstrate that the binary detection accuracy of both human observers and automated detectors often falls below 30%.
arXiv Detail & Related papers (2024-01-29T03:20:19Z) - Denoising Diffusion Models for Plug-and-Play Image Restoration [135.6359475784627]
This paper proposes DiffPIR, which integrates the traditional plug-and-play method into the diffusion sampling framework.
Compared to plug-and-play IR methods that rely on discriminative Gaussian denoisers, DiffPIR is expected to inherit the generative ability of diffusion models.
arXiv Detail & Related papers (2023-05-15T20:24:38Z) - Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342]
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative over discriminative models for downstream tasks.
arXiv Detail & Related papers (2023-03-28T17:59:56Z) - DIRE for Diffusion-Generated Image Detection [128.95822613047298]
We propose a novel representation called DIffusion Reconstruction Error (DIRE).
DIRE measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model.
It provides a hint that DIRE can serve as a bridge to distinguish generated and real images.
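Per the summary above, DIRE measures the error between an input image and its reconstruction by a pre-trained diffusion model. A minimal sketch of that idea follows; `reconstruct` is a placeholder callable standing in for DDIM inversion plus denoising with a pre-trained model, and the blur-based stand-in below is purely illustrative, not the paper's pipeline.

```python
import numpy as np

def dire(image, reconstruct):
    """Per-pixel absolute error between an image and its reconstruction.

    `reconstruct` is a hypothetical callable standing in for the
    pre-trained diffusion model's inversion + reconstruction step.
    """
    recon = reconstruct(image)
    return np.abs(image - recon)

# Toy stand-in for the reconstruction step: a mild axis-wise average.
# A real diffusion model reconstructs generated images more faithfully
# than real ones, which is the signal DIRE exploits.
def fake_reconstruct(x):
    return 0.5 * (x + np.roll(x, 1, axis=0))

img = np.random.rand(32, 32)
error_map = dire(img, fake_reconstruct)
score = error_map.mean()  # e.g. pool the error map into a scalar score
```

A classifier (or a simple threshold on the pooled score) would then separate real from generated images based on this error map.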
arXiv Detail & Related papers (2023-03-16T13:15:03Z)