Any-Resolution AI-Generated Image Detection by Spectral Learning
- URL: http://arxiv.org/abs/2411.19417v1
- Date: Thu, 28 Nov 2024 23:55:19 GMT
- Title: Any-Resolution AI-Generated Image Detection by Spectral Learning
- Authors: Dimitrios Karageorgiou, Symeon Papadopoulos, Ioannis Kompatsiaris, Efstratios Gavves
- Abstract summary: We build upon the key idea that the spectral distribution of real images constitutes both an invariant and highly discriminative pattern for AI-generated image detection.
Our approach achieves a 5.5% absolute improvement in AUC over the previous state-of-the-art across 13 recent generative approaches.
- Score: 36.562914181733426
- License:
- Abstract: Recent works have established that AI models introduce spectral artifacts into generated images and propose approaches for learning to capture them using labeled data. However, the significant differences in such artifacts among different generative models hinder these approaches from generalizing to generators not seen during training. In this work, we build upon the key idea that the spectral distribution of real images constitutes both an invariant and highly discriminative pattern for AI-generated image detection. To model this under a self-supervised setup, we employ masked spectral learning using the pretext task of frequency reconstruction. Since generated images constitute out-of-distribution samples for this model, we propose spectral reconstruction similarity to capture this divergence. Moreover, we introduce spectral context attention, which enables our approach to efficiently capture subtle spectral inconsistencies in images of any resolution. Our spectral AI-generated image detection approach (SPAI) achieves a 5.5% absolute improvement in AUC over the previous state-of-the-art across 13 recent generative approaches, while exhibiting robustness against common online perturbations.
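The abstract names masked spectral learning and spectral reconstruction similarity but does not spell out the computation. Below is a minimal sketch of how such a score could be formed: mask part of an image's frequency spectrum, reconstruct it with a model pretrained on real images, and treat the agreement on the masked bins as an in-distribution score. The `SpectralReconstructor` placeholder, the random masking scheme, and the cosine-similarity scoring are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: spectral reconstruction similarity as an out-of-distribution
# score. The reconstruction model is a stand-in; in the paper it would come
# from self-supervised masked spectral learning on real images.
import torch
import torch.nn as nn


class SpectralReconstructor(nn.Module):
    """Placeholder for a model trained to reconstruct masked frequency bins."""

    def __init__(self, spectrum_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(spectrum_dim, 512), nn.GELU(), nn.Linear(512, spectrum_dim)
        )

    def forward(self, masked_spectrum: torch.Tensor) -> torch.Tensor:
        return self.net(masked_spectrum)


def spectral_reconstruction_similarity(
    image: torch.Tensor, model: SpectralReconstructor, mask_ratio: float = 0.5
) -> torch.Tensor:
    """Mask part of the spectrum, reconstruct it, and measure agreement.

    Real images should be reconstructed well (high similarity); generated
    images, being out-of-distribution for the model, should score lower.
    """
    gray = image.mean(dim=1)                          # (B, H, W), grayscale for brevity
    spectrum = torch.fft.fft2(gray).abs().log1p()     # log-magnitude spectrum
    flat = spectrum.flatten(start_dim=1)              # (B, H*W)

    # Randomly mask a fraction of frequency bins and reconstruct them.
    mask = (torch.rand_like(flat) < mask_ratio).float()
    reconstructed = model(flat * (1.0 - mask))

    # Cosine similarity restricted to the masked bins.
    return nn.functional.cosine_similarity(reconstructed * mask, flat * mask, dim=1)


if __name__ == "__main__":
    imgs = torch.rand(4, 3, 64, 64)                   # stand-in image batch
    model = SpectralReconstructor(spectrum_dim=64 * 64)
    print(spectral_reconstruction_similarity(imgs, model))
```

In the paper, the resulting similarity features feed a detector; the spectral context attention mechanism for arbitrary resolutions is not modeled in this sketch.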
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Oscillation Inversion: Understand the structure of Large Flow Model through the Lens of Inversion Method [60.88467353578118]
We show that a fixed-point-inspired iterative approach to invert real-world images does not achieve convergence, instead oscillating between distinct clusters.
We introduce a simple and fast distribution transfer technique that facilitates image enhancement, stroke-based recoloring, as well as visual prompt-guided image editing.
arXiv Detail & Related papers (2024-11-17T17:45:37Z) - StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - Spectrum Translation for Refinement of Image Generation (STIG) Based on Contrastive Learning and Spectral Filter Profile [15.5188527312094]
We propose a framework to mitigate the frequency-domain disparity of generated images.
This is realized by spectrum translation for the refinement of image generation (STIG) based on contrastive learning.
We evaluate our framework across eight fake image datasets and various cutting-edge models to demonstrate the effectiveness of STIG.
arXiv Detail & Related papers (2024-03-08T06:39:24Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - DiffUCD: Unsupervised Hyperspectral Image Change Detection with Semantic Correlation Diffusion Model [46.68717345017946]
Hyperspectral image change detection (HSI-CD) has emerged as a crucial research area in remote sensing.
We propose a novel unsupervised HSI-CD method with a semantic correlation diffusion model (DiffUCD).
Our method can achieve comparable results to those fully supervised methods requiring numerous samples.
arXiv Detail & Related papers (2023-05-21T09:21:41Z) - Exploring the Asynchronous of the Frequency Spectra of GAN-generated Facial Images [19.126496628073376]
We propose a new approach that explores the asynchronous frequency spectra of color channels, which is simple but effective for training both unsupervised and supervised learning models to distinguish GAN-based synthetic images.
Our experimental results show that the discrepancy of spectra in the frequency domain is a practical artifact to effectively detect various types of GAN-based generated images.
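As a rough illustration of the cross-channel spectrum comparison described above, the sketch below computes a per-channel log-magnitude FFT spectrum and a simple pairwise L1 discrepancy between channels. The function name and the choice of statistic are assumptions for illustration, not the feature used in the paper.

```python
# Hedged sketch: quantify how much the frequency spectra of the R, G, B
# channels disagree. The pairwise L1 statistic is an illustrative choice.
import numpy as np


def channel_spectrum_discrepancy(image: np.ndarray) -> float:
    """image: (H, W, 3) array in [0, 1]. Returns mean pairwise spectrum gap."""
    spectra = []
    for c in range(3):
        fft = np.fft.fftshift(np.fft.fft2(image[..., c]))
        spectra.append(np.log1p(np.abs(fft)))

    gaps = []
    for i in range(3):
        for j in range(i + 1, 3):
            gaps.append(np.mean(np.abs(spectra[i] - spectra[j])))
    return float(np.mean(gaps))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128, 3))   # stand-in image
    print(channel_spectrum_discrepancy(img))
```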
arXiv Detail & Related papers (2021-12-15T11:34:11Z) - Self-supervised GAN Detector [10.963740942220168]
Generative models can be abused for malicious purposes, such as fraud, defamation, and fake news.
We propose a novel framework to distinguish the unseen generated images outside of the training settings.
The proposed method is built around an artificial fingerprint generator that reconstructs high-quality artificial fingerprints of GAN images.
arXiv Detail & Related papers (2021-11-12T06:19:04Z) - Spectral Distribution Aware Image Generation [11.295032417617456]
Photo-realistic images produced by deep generative models cannot be easily distinguished from real images by the human eye.
We propose to generate images according to the frequency distribution of the real data by employing a spectral discriminator.
We show that the resulting models can better generate images with realistic frequency spectra, which are thus harder to detect by this cue.
arXiv Detail & Related papers (2020-12-05T19:46:48Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method outperforms the current state of the art.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)