Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets
- URL: http://arxiv.org/abs/2403.17608v2
- Date: Thu, 28 Mar 2024 15:24:16 GMT
- Title: Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets
- Authors: Patrick Grommelt, Louis Weiss, Franz-Josef Pfreundt, Janis Keuper
- Abstract summary: Many datasets for AI-generated image detection contain biases related to JPEG compression and image size.
We demonstrate that detectors indeed learn from these undesired factors.
Removing these biases leads to an increase of more than 11 percentage points in cross-generator performance for ResNet50 and Swin-T detectors.
- Score: 6.554757265434464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of generative image models has highlighted the urgent need to detect artificial content, which is a crucial step in combating widespread manipulation and misinformation. Consequently, numerous detectors and associated datasets have emerged. However, many of these datasets inadvertently introduce undesirable biases, thereby impacting the effectiveness and evaluation of detectors. In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size. Using the GenImage dataset, we demonstrate that detectors indeed learn from these undesired factors. Furthermore, we show that removing the named biases substantially increases robustness to JPEG compression and significantly alters the cross-generator performance of evaluated detectors. Specifically, it leads to more than 11 percentage points increase in cross-generator performance for ResNet50 and Swin-T detectors on the GenImage dataset, achieving state-of-the-art results. We provide the dataset and source codes of this paper on the anonymous website: https://www.unbiased-genimage.org
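To make the abstract's point concrete, here is a minimal sketch of the size bias it describes (the resolutions below are hypothetical, not taken from GenImage): if generated images all share one fixed resolution while real images vary, a trivial "detector" that only reads the image size separates the two classes perfectly, so a learned detector can exploit the same shortcut instead of genuine generation artifacts.

```python
# Hypothetical illustration of a dataset size bias: generated images often
# come at a single fixed resolution, while real photos vary. A classifier
# that never looks at a pixel can then separate real from fake.
real_sizes = [(500, 375), (333, 500), (640, 480), (500, 500)]  # varied (illustrative)
fake_sizes = [(512, 512)] * 4  # many generators emit one fixed resolution

def size_only_detector(size):
    # Predicts "fake" purely from the resolution -- an undesired shortcut.
    return "fake" if size == (512, 512) else "real"

preds = [size_only_detector(s) for s in real_sizes + fake_sizes]
labels = ["real"] * len(real_sizes) + ["fake"] * len(fake_sizes)
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)  # perfect separation without seeing any image content
```

The same argument applies to JPEG compression: if only the real images are JPEG-compressed, compression artifacts become a label-correlated shortcut, which is why the paper equalizes these factors before training.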
Related papers
- Improving Interpretability and Robustness for the Detection of AI-Generated Images [6.116075037154215]
We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings.
We show how to interpret them, shedding light on how images produced by various AI generators differ from real ones.
arXiv Detail & Related papers (2024-06-21T10:33:09Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [49.93362169016503]
The rapid advancement of photorealistic generators has reached a critical juncture where the discrepancy between authentic and manipulated images is increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, most of their forged faces are generated with GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - GenDet: Towards Good Generalizations for AI-Generated Image Detection [27.899521298845357]
Existing methods can effectively detect images generated by seen generators, but it is challenging to detect those generated by unseen generators.
This paper addresses the unseen-generator detection problem by considering this task from the perspective of anomaly detection.
Our method encourages smaller output discrepancies between the student and the teacher models for real images while aiming for larger discrepancies for fake images.
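The teacher-student idea summarized above can be sketched in a few lines (a toy illustration with made-up output vectors, not the GenDet implementation): the anomaly score is the teacher-student output discrepancy, trained to be small on real images, so a large gap flags a fake even from an unseen generator.

```python
# Toy sketch of discrepancy-based anomaly scoring (illustrative numbers,
# not the GenDet model): the student learns to match the teacher on real
# images only, so a large teacher-student gap serves as the fake score.
def discrepancy(teacher_out, student_out):
    # Mean squared difference between the two models' outputs.
    return sum((t - s) ** 2 for t, s in zip(teacher_out, student_out)) / len(teacher_out)

teacher_real = [0.2, -0.5, 0.9, 0.1]
student_real = [0.21, -0.48, 0.88, 0.12]  # student tracks the teacher on real data
teacher_fake = [0.3, 0.7, -0.4, 0.6]
student_fake = [1.1, -0.2, 0.5, -0.3]     # mismatch on a fake image

score_real = discrepancy(teacher_real, student_real)
score_fake = discrepancy(teacher_fake, student_fake)
print(score_real, score_fake)  # the fake score is far larger
```

In practice a detection threshold on this score would be tuned on held-out validation data.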
arXiv Detail & Related papers (2023-12-12T11:20:45Z) - Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images [67.18010640829682]
We show that AI-generated images introduce an invisible relevance bias to text-image retrieval models.
The inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
We propose an effective training method aimed at alleviating the invisible relevance bias.
arXiv Detail & Related papers (2023-11-23T16:22:58Z) - Exposing Image Splicing Traces in Scientific Publications via Uncertainty-guided Refinement [30.698359275889363]
A surge in scientific publications suspected of image manipulation has led to numerous retractions.
Image splicing detection is more challenging due to the lack of reference images and the typically small tampered areas.
We propose an Uncertainty-guided Refinement Network (URN) to mitigate the impact of disruptive factors.
arXiv Detail & Related papers (2023-09-28T12:36:12Z) - On quantifying and improving realism of images generated with diffusion [50.37578424163951]
We propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image.
IRS is easily usable as a measure to classify a given image as real or fake.
We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN.
Our efforts have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models.
arXiv Detail & Related papers (2023-09-26T08:32:55Z) - GenImage: A Million-Scale Benchmark for Detecting AI-Generated Image [28.38575401686718]
We introduce the GenImage dataset, which includes over one million pairs of AI-generated fake images and collected real images.
These advantages allow detectors trained on GenImage to be evaluated thoroughly and to demonstrate strong applicability to diverse images.
We conduct a comprehensive analysis of the dataset and propose two tasks for evaluating the detection method in resembling real-world scenarios.
arXiv Detail & Related papers (2023-06-14T15:21:09Z) - Detecting Anomalies using Generative Adversarial Networks on Images [0.0]
This paper proposes a novel Generative Adversarial Network (GAN) based model for anomaly detection.
It uses normal (non-anomalous) images to learn a model of normality, based on which it detects whether an input image contains an anomalous or threat object.
Experiments are performed on three datasets, viz. CIFAR-10, MVTec AD (for industrial applications), and SIXray (for X-ray baggage security).
arXiv Detail & Related papers (2022-04-18T17:59:01Z) - Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images [60.89777029184023]
We propose a semi-supervised CD model in which we formulate an unsupervised CD loss in addition to the supervised Cross-Entropy (CE) loss.
Experiments conducted on two publicly available CD datasets show that the proposed semi-supervised CD method approaches the performance of fully supervised CD.
arXiv Detail & Related papers (2022-04-18T17:59:01Z) - Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real content to the human eye.
We propose a novel fake detection method that re-synthesizes testing images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z) - Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of a cover image should be treated differently, since pixels differ in how much modification they can tolerate.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and attention mechanisms.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
arXiv Detail & Related papers (2020-11-21T19:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.