Seeing is not always believing: Benchmarking Human and Model Perception
of AI-Generated Images
- URL: http://arxiv.org/abs/2304.13023v3
- Date: Fri, 22 Sep 2023 18:16:28 GMT
- Title: Seeing is not always believing: Benchmarking Human and Model Perception
of AI-Generated Images
- Authors: Zeyu Lu, Di Huang, Lei Bai, Jingjing Qu, Chengyue Wu, Xihui Liu, Wanli
Ouyang
- Abstract summary: There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
- Score: 66.20578637253831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photos serve as a way for humans to record what they experience in their
daily lives, and they are often regarded as trustworthy sources of information.
However, there is a growing concern that the advancement of artificial
intelligence (AI) technology may produce fake photos, which can create
confusion and diminish trust in photographs. This study aims to comprehensively
evaluate agents for distinguishing state-of-the-art AI-generated visual
content. Our study benchmarks both human capability and cutting-edge fake image
detection AI algorithms, using a newly collected large-scale fake image dataset
Fake2M. In our human perception evaluation, titled HPBench, we discovered that
humans struggle significantly to distinguish real photos from AI-generated
ones, with a misclassification rate of 38.7%. In addition, we evaluate model
capability for AI-generated image detection in MPBench; the
top-performing model from MPBench achieves a 13% failure rate under the same
setting used in the human evaluation. We hope that our study can raise
awareness of the potential risks of AI-generated images and facilitate further
research to prevent the spread of false information. More information is
available at https://github.com/Inf-imagine/Sentry.
Related papers
- Self-Supervised Learning for Detecting AI-Generated Faces as Anomalies [58.11545090128854]
We describe an anomaly detection method for AI-generated faces by leveraging self-supervised learning of camera-intrinsic and face-specific features purely from photographic face images.
The success of our method lies in designing a pretext task that trains a feature extractor to rank four ordinal exchangeable image file format (EXIF) tags and classify artificially manipulated face images.
arXiv Detail & Related papers (2025-01-04T06:23:24Z)
- Human vs. AI: A Novel Benchmark and a Comparative Study on the Detection of Generated Images and the Impact of Prompts [5.222694057785324]
This work examines the influence of the prompt's level of detail on the detectability of fake images.
We create a novel dataset, COCOXGEN, which consists of real photos from the COCO dataset as well as images generated with SDXL and Fooocus.
Our user study with 200 participants shows that images generated with longer, more detailed prompts are detected significantly more easily than those generated with short prompts.
arXiv Detail & Related papers (2024-12-12T20:37:52Z)
- Detecting Discrepancies Between AI-Generated and Natural Images Using Uncertainty [91.64626435585643]
We propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty to mitigate misuse and associated risks.
The motivation arises from the fundamental assumption regarding the distributional discrepancy between natural and AI-generated images.
We propose to leverage large-scale pre-trained models to calculate the uncertainty as the score for detecting AI-generated images.
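The uncertainty-as-score idea can be illustrated with a minimal sketch. The paper's actual models and scoring rule are not reproduced here; as a hypothetical stand-in, disagreement (variance) among several feature extractors is used as the uncertainty score, on the assumption that AI-generated images fall off the natural-image distribution the extractors were trained on.

```python
import numpy as np

def uncertainty_score(image_vec, feature_extractors):
    """Toy uncertainty score: disagreement (variance) among the feature
    representations produced by several pre-trained extractors.
    Higher disagreement is read as higher uncertainty."""
    feats = np.stack([f(image_vec) for f in feature_extractors])  # (n_models, d)
    return float(feats.var(axis=0).mean())

# Illustrative stand-ins for large-scale pre-trained models:
# random linear maps with a tanh nonlinearity.
rng = np.random.default_rng(0)
extractors = [lambda x, W=rng.standard_normal((8, 16)): np.tanh(W @ x)
              for _ in range(4)]

image_vec = rng.standard_normal(16)  # flattened stand-in for an image
score = uncertainty_score(image_vec, extractors)
print(score >= 0.0)  # → True (variance is non-negative)
```

In practice the extractors would be frozen large-scale vision models and the score would be thresholded to flag likely AI-generated inputs.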
arXiv Detail & Related papers (2024-12-08T11:32:25Z)
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
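The surprise-relative-to-a-model idea behind ZED can be sketched in a few lines. ZED itself uses a learned lossless-compression model of real images; the toy version below substitutes a trivial left-neighbor predictor and scores an image by the entropy of its prediction residuals, so it only illustrates the principle, not the method.

```python
import numpy as np

def surprise_score(img):
    """Toy 'surprisal' score: predict each pixel from its left neighbor
    and measure the entropy (in bits) of the residual histogram.
    More predictable images yield lower scores."""
    residual = img[:, 1:] - img[:, :-1]        # simple causal predictor
    hist, _ = np.histogram(residual, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # very predictable
noisy = np.random.default_rng(0).random((32, 32))     # unpredictable
print(surprise_score(smooth) < surprise_score(noisy))  # → True
```

A detector would compare such a score against the typical surprisal of real photographs and flag outliers.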
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study [6.661332913985627]
We investigate how humans perceive and distinguish between real and fake images.
Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%.
arXiv Detail & Related papers (2024-09-23T19:34:30Z)
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We propose AIDE (AI-generated Image DEtector with Hybrid Features) to detect AI-generated images.
AIDE achieves +3.5% and +4.6% improvements to state-of-the-art methods.
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- Development of a Dual-Input Neural Model for Detecting AI-Generated Imagery [0.0]
It is important to develop tools that are able to detect AI-generated images.
This paper proposes a dual-branch neural network architecture that takes both images and their Fourier frequency decomposition as inputs.
Our proposed model achieves an accuracy of 94% on the CIFAKE dataset, which significantly outperforms classic ML methods and CNNs.
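The preprocessing step for such a dual-branch model can be sketched as follows. This is not the paper's implementation; it only shows one common way to build the two inputs, pairing the raw image with a centered log-magnitude Fourier spectrum, where generation artifacts often appear as periodic patterns.

```python
import numpy as np

def dual_inputs(img):
    """Build the two inputs for a dual-branch detector: the raw image
    and its centered log-magnitude Fourier spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))  # move DC to the center
    log_mag = np.log1p(np.abs(spectrum))          # compress dynamic range
    return img, log_mag

img = np.random.default_rng(0).random((64, 64))  # stand-in grayscale image
spatial, freq = dual_inputs(img)
print(spatial.shape == freq.shape == (64, 64))   # → True
```

Each branch of the network would then process one of the two arrays, with the branch outputs fused before the final real/fake classification head.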
arXiv Detail & Related papers (2024-06-19T16:42:04Z)
- Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images [67.18010640829682]
We show that AI-generated images introduce an invisible relevance bias to text-image retrieval models.
The inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
We propose an effective training method aimed at alleviating the invisible relevance bias.
arXiv Detail & Related papers (2023-11-23T16:22:58Z)
- The Value of AI Guidance in Human Examination of Synthetically-Generated Faces [4.144518961834414]
We investigate whether human-guided synthetic face detectors can assist non-expert human operators in the task of synthetic image detection.
We conducted a large-scale experiment with more than 1,560 subjects classifying whether an image shows an authentic or synthetically-generated face.
Models trained with human-guidance offer better support to human examination of face images when compared to models trained traditionally using cross-entropy loss.
arXiv Detail & Related papers (2022-08-22T18:45:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.