AI-Generated Image Detection using a Cross-Attention Enhanced
Dual-Stream Network
- URL: http://arxiv.org/abs/2306.07005v2
- Date: Thu, 9 Nov 2023 04:49:47 GMT
- Title: AI-Generated Image Detection using a Cross-Attention Enhanced
Dual-Stream Network
- Authors: Ziyi Xi, Wenmin Huang, Kangkang Wei, Weiqi Luo and Peijia Zheng
- Abstract summary: Our research focuses on the text-to-image generation process in AIGC.
We develop a robust dual-stream network comprised of a residual stream and a content stream.
Our method consistently outperforms traditional CG detection techniques across a range of image resolutions.
- Score: 10.535234861120209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid evolution of AI Generated Content (AIGC), forged images
produced through this technology are inherently more deceptive and require less
human intervention compared to traditional Computer-generated Graphics (CG).
However, owing to the disparities between CG and AIGC, conventional CG
detection methods tend to be inadequate in identifying AIGC-produced images. To
address this issue, our research concentrates on the text-to-image generation
process in AIGC. We first assemble two text-to-image databases
utilizing two distinct AI systems, DALLE2 and DreamStudio. Aiming to
holistically capture the inherent anomalies produced by AIGC, we develop a
robust dual-stream network comprising a residual stream and a content stream.
The former employs the Spatial Rich Model (SRM) to meticulously extract various
texture information from images, while the latter seeks to capture additional
forged traces in low frequency, thereby extracting complementary information
that the residual stream may overlook. To enhance the information exchange
between these two streams, we incorporate a cross multi-head attention
mechanism. Numerous comparative experiments are performed on both databases,
and the results show that our detection method consistently outperforms
traditional CG detection techniques across a range of image resolutions.
Moreover, our method exhibits superior performance through a series of
robustness tests and cross-database experiments. When applied to widely
recognized traditional CG benchmarks such as SPL2018 and DsTok, our approach
significantly exceeds the capabilities of other existing methods in the field
of CG detection.
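As a rough illustration of the pipeline the abstract describes (SRM high-pass residuals in one stream, low-frequency content in the other, fused by cross-attention), the following is a minimal NumPy sketch. It is not the authors' implementation: the single SRM kernel, the 3x3 box low-pass, and the single-head attention are stand-ins for the paper's full filter bank and cross multi-head attention mechanism.

```python
import numpy as np

# One classic SRM high-pass kernel (the 2nd-order "square" residual).
# The paper's residual stream applies a bank of such filters; this
# sketch uses a single one for illustration. Note the entries sum to 0,
# so flat regions produce zero residual and only texture survives.
SRM_SQUARE = np.array([[-1,  2, -1],
                       [ 2, -4,  2],
                       [-1,  2, -1]], dtype=np.float64) / 4.0

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D correlation (no padding), pure NumPy."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def residual_stream(img):
    """High-frequency texture residuals (SRM-style high-pass)."""
    return conv2d_valid(img, SRM_SQUARE)

def content_stream(img):
    """Low-frequency content via a 3x3 box (low-pass) filter,
    a simple stand-in for the paper's content branch."""
    box = np.full((3, 3), 1.0 / 9.0)
    return conv2d_valid(img, box)

def cross_attention(q_feats, kv_feats, d):
    """Single-head cross-attention: queries from one stream attend
    over keys/values from the other stream (the paper uses a cross
    multi-head variant). Rows are feature vectors of dimension d."""
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ kv_feats
```

Because the SRM kernel sums to zero, a constant region yields an all-zero residual while the box filter reproduces it unchanged, which is why the two streams carry complementary (texture vs. content) information that the attention step can exchange.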
Related papers
- Improving Interpretability and Robustness for the Detection of AI-Generated Images [6.116075037154215]
We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings.
We show how to interpret them, shedding light on how images produced by various AI generators differ from real ones.
arXiv Detail & Related papers (2024-06-21T10:33:09Z)
- DA-HFNet: Progressive Fine-Grained Forgery Image Detection and Localization Based on Dual Attention [12.36906630199689]
We construct a DA-HFNet forged image dataset guided by text or image-assisted GAN and Diffusion model.
Our goal is to utilize a hierarchical progressive network to capture forged artifacts at different scales for detection and localization.
arXiv Detail & Related papers (2024-06-03T16:13:33Z)
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Dual-Image Enhanced CLIP for Zero-Shot Anomaly Detection [58.228940066769596]
We introduce a Dual-Image Enhanced CLIP approach, leveraging a joint vision-language scoring system.
Our methods process pairs of images, utilizing each as a visual reference for the other, thereby enriching the inference process with visual context.
Our approach significantly exploits the potential of vision-language joint anomaly detection and demonstrates comparable performance with current SOTA methods across various datasets.
arXiv Detail & Related papers (2024-05-08T03:13:20Z)
- Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images [13.089550724738436]
Diffusion models (DMs) have revolutionized image generation, producing high-quality images with applications spanning various fields.
Their ability to create hyper-realistic images poses significant challenges in distinguishing between real and synthetic content.
This work introduces a robust detection framework that integrates image and text features extracted by CLIP model with a Multilayer Perceptron (MLP) classifier.
arXiv Detail & Related papers (2024-04-19T14:30:41Z)
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [49.93362169016503]
The rapid advancement of photorealistic generators has reached a critical juncture where the discrepancy between authentic and manipulated images is increasingly indistinguishable.
Although there have been a number of publicly available face forgery datasets, the forgery faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- Additional Look into GAN-based Augmentation for Deep Learning COVID-19 Image Classification [57.1795052451257]
We study the dependence of the GAN-based augmentation performance on dataset size with a focus on small samples.
We train StyleGAN2-ADA with both sets and then, after validating the quality of generated images, we use the trained GANs as one of the augmentation approaches in multi-class classification problems.
The GAN-based augmentation approach is found to be comparable with classical augmentation in the case of medium and large datasets but underperforms in the case of smaller datasets.
arXiv Detail & Related papers (2024-01-26T08:28:13Z)
- Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
arXiv Detail & Related papers (2023-09-03T08:07:26Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images are fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse them in the common space either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Data Augmentation via Mixed Class Interpolation using Cycle-Consistent Generative Adversarial Networks Applied to Cross-Domain Imagery [16.870604081967866]
Machine learning driven object detection and classification within non-visible imagery has an important role in many fields.
However, such applications often suffer due to the limited quantity and variety of non-visible spectral domain imagery.
This paper proposes and evaluates a novel data augmentation approach that leverages the more readily available visible-band imagery.
arXiv Detail & Related papers (2020-05-05T18:53:38Z)