Seeing is not always believing: Benchmarking Human and Model Perception
of AI-Generated Images
- URL: http://arxiv.org/abs/2304.13023v3
- Date: Fri, 22 Sep 2023 18:16:28 GMT
- Title: Seeing is not always believing: Benchmarking Human and Model Perception
of AI-Generated Images
- Authors: Zeyu Lu, Di Huang, Lei Bai, Jingjing Qu, Chengyue Wu, Xihui Liu, Wanli
Ouyang
- Abstract summary: There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
- Score: 66.20578637253831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photos serve as a way for humans to record what they experience in their
daily lives, and they are often regarded as trustworthy sources of information.
However, there is a growing concern that the advancement of artificial
intelligence (AI) technology may produce fake photos, which can create
confusion and diminish trust in photographs. This study aims to comprehensively
evaluate agents for distinguishing state-of-the-art AI-generated visual
content. Our study benchmarks both human capability and cutting-edge fake image
detection AI algorithms, using a newly collected large-scale fake image dataset
Fake2M. In our human perception evaluation, titled HPBench, we discovered that
humans struggle significantly to distinguish real photos from AI-generated
ones, with a misclassification rate of 38.7%. Alongside this, we run MPBench,
an evaluation of model capability for detecting AI-generated images; the
top-performing model in MPBench achieves a 13% failure rate under the same
setting used in the human evaluation. We hope that our study can raise
awareness of the potential risks of AI-generated images and facilitate further
research to prevent the spread of false information. More information is
available at https://github.com/Inf-imagine/Sentry.
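As a minimal illustration of the metrics reported above (not code from the paper), the sketch below computes a misclassification, or failure, rate from binary real-vs-fake labels and predictions; the example labels are hypothetical.
```python
# Minimal sketch (not from the paper): the misclassification / failure rate
# used to compare human and model performance. Labels and predictions here
# are hypothetical placeholders.

def failure_rate(labels, predictions):
    """Fraction of images whose real/fake label is predicted incorrectly."""
    assert len(labels) == len(predictions)
    errors = sum(1 for y, p in zip(labels, predictions) if y != p)
    return errors / len(labels)

# Example: 1 = AI-generated, 0 = real photo.
labels      = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 1, 0, 1, 0]
print(f"failure rate: {failure_rate(labels, predictions):.1%}")  # 25.0%
```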
Related papers
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
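A rough, hypothetical sketch of the surprisal idea described for ZED above (not the authors' implementation): score an image by how surprising its pixels are under a predictive model of real images. Here a trivial left-neighbor pixel predictor stands in for the learned model.
```python
# Hypothetical sketch of a zero-shot, surprisal-based score (not ZED itself).
# A real system would use a learned model of natural images; a trivial
# left-neighbor pixel predictor stands in for it here.
import numpy as np

def surprisal_score(image: np.ndarray, sigma: float = 8.0) -> float:
    """Average negative log-likelihood of pixels under a toy predictive model.

    image: H x W array of grayscale values in [0, 255].
    Each pixel is 'predicted' by its left neighbor with Gaussian uncertainty;
    higher scores mean the image is more surprising to the model.
    """
    x = image.astype(np.float64)
    residual = x[:, 1:] - x[:, :-1]  # prediction error of the toy model
    nll = 0.5 * (residual / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))
    return float(nll.mean())

# Usage: images whose score deviates strongly from typical real-photo scores
# would be flagged as likely AI-generated (threshold chosen on real data).
rng = np.random.default_rng(0)
fake_smooth = rng.normal(128, 2, size=(64, 64))  # unnaturally smooth image
real_like = rng.normal(128, 30, size=(64, 64))   # noisier, photo-like image
print(surprisal_score(fake_smooth), surprisal_score(real_like))
```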
- Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study [6.661332913985627]
We investigate how humans perceive and distinguish between real and fake images.
Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%.
arXiv Detail & Related papers (2024-09-23T19:34:30Z)
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We present a sanity check on whether the task of AI-generated image detection has been solved.
To quantify the generalization of existing methods, we evaluate 9 off-the-shelf AI-generated image detectors on the Chameleon dataset.
We propose AIDE (AI-generated Image DEtector with Hybrid Features), which leverages multiple experts to simultaneously extract visual artifacts and noise patterns.
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- Development of a Dual-Input Neural Model for Detecting AI-Generated Imagery [0.0]
It is important to develop tools that are able to detect AI-generated images.
This paper proposes a dual-branch neural network architecture that takes both images and their Fourier frequency decomposition as inputs.
Our proposed model achieves an accuracy of 94% on the CIFAKE dataset, which significantly outperforms classic ML methods and CNNs.
arXiv Detail & Related papers (2024-06-19T16:42:04Z)
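The dual-input idea above lends itself to a short sketch. The following is a hypothetical PyTorch layout, not the paper's actual model: one branch sees the RGB image, the other the log-magnitude of its 2D Fourier transform, and the two feature vectors are concatenated before classification.
```python
# Hypothetical sketch of a dual-branch detector (not the paper's model):
# one branch processes the RGB image, the other its Fourier log-magnitude.
import torch
import torch.nn as nn

class DualInputDetector(nn.Module):
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_branch = branch()
        self.freq_branch = branch()
        self.classifier = nn.Linear(64, 2)  # real vs AI-generated

    def forward(self, x):
        # Frequency input: log-magnitude of the per-channel 2D FFT.
        freq = torch.log1p(torch.abs(torch.fft.fft2(x)))
        feats = torch.cat([self.rgb_branch(x), self.freq_branch(freq)], dim=1)
        return self.classifier(feats)

# Usage with a dummy batch of 32x32 RGB images (e.g., CIFAKE-sized inputs).
model = DualInputDetector()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```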
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images [67.18010640829682]
We show that AI-generated images introduce an invisible relevance bias to text-image retrieval models.
The inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
We propose an effective training method aimed at alleviating the invisible relevance bias.
arXiv Detail & Related papers (2023-11-23T16:22:58Z)
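To make the notion of an "invisible relevance bias" concrete, here is a small hypothetical diagnostic, not the paper's method: for each query, compare the retrieval scores a model assigns to a real image and to an AI-generated image with equivalent content, and report how often and by how much the generated image wins.
```python
# Hypothetical diagnostic for relevance bias (not the paper's method).
# For each query we have the retrieval score of a real image and of an
# AI-generated image with equivalent content; a bias shows up as the
# generated image being systematically scored higher.
from statistics import mean

def relevance_bias(real_scores, gen_scores):
    gaps = [g - r for r, g in zip(real_scores, gen_scores)]
    win_rate = sum(gap > 0 for gap in gaps) / len(gaps)
    return mean(gaps), win_rate

# Toy scores from a hypothetical text-image retriever (cosine similarities).
real_scores = [0.31, 0.28, 0.35, 0.30]
gen_scores = [0.34, 0.33, 0.36, 0.29]
mean_gap, win_rate = relevance_bias(real_scores, gen_scores)
print(f"mean score gap: {mean_gap:+.3f}, generated wins: {win_rate:.0%}")
```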
- The Value of AI Guidance in Human Examination of Synthetically-Generated Faces [4.144518961834414]
We investigate whether human-guided synthetic face detectors can assist non-expert human operators in the task of synthetic image detection.
We conducted a large-scale experiment with more than 1,560 subjects classifying whether an image shows an authentic or synthetically-generated face.
Models trained with human guidance offer better support for human examination of face images than models trained conventionally with cross-entropy loss.
arXiv Detail & Related papers (2022-08-22T18:45:53Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leads to high-frequency artifacts that can be easily spotted by models yet difficult to generalize, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
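As a simplified, non-adversarial stand-in for the idea above (my assumption of the mechanism, not the authors' training scheme), the sketch below suppresses high-frequency content with an FFT low-pass filter, the kind of augmentation that removes the easy artifacts a detector might otherwise latch onto.
```python
# Simplified stand-in (not the paper's adversarial game): suppress the
# high-frequency components of an image with an FFT low-pass filter, so a
# detector trained on such inputs cannot rely on fragile high-frequency cues.
import numpy as np

def low_pass(image: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Zero out spatial frequencies above keep_ratio of the Nyquist band."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = radius <= keep_ratio * min(h, w) / 2
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Usage: apply as a training-time augmentation before feeding the detector.
img = np.random.rand(64, 64)
blurred = low_pass(img)
print(img.std(), blurred.std())  # the filtered image is visibly smoother
```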
- Cognitive Anthropomorphism of AI: How Humans and Computers Classify Images [0.0]
Humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence.
This mismatch presents an obstacle to appropriate human-AI interaction.
I offer three strategies for system design that can address the mismatch between human and AI classification.
arXiv Detail & Related papers (2020-02-07T21:49:58Z)