Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study
- URL: http://arxiv.org/abs/2409.15498v1
- Date: Mon, 23 Sep 2024 19:34:30 GMT
- Title: Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study
- Authors: Jin Huang, Subhadra Gopalakrishnan, Trisha Mittal, Jake Zuena, Jaclyn Pytlarz
- Abstract summary: We investigate how humans perceive and distinguish between real and fake images.
Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%.
- Score: 6.661332913985627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in Artificial Intelligence have led to remarkable improvements in generating realistic human faces. While these advancements demonstrate significant progress in generative models, they also raise concerns about the potential misuse of these generated images. In this study, we investigate how humans perceive and distinguish between real and fake images. We designed a perceptual experiment using eye-tracking technology to analyze how individuals differentiate real faces from those generated by AI. Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%. Additionally, we found that participants scrutinize images more closely when they suspect an image to be fake. We believe this study offers valuable insights into human perception of AI-generated media.
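The analysis described in the abstract (per-trial real/fake accuracy, plus more scrutiny when an image is suspected to be fake) can be illustrated with a minimal sketch. All field names and numbers below are hypothetical stand-ins, not data or code from the paper:

```python
# Hypothetical sketch: per-trial accuracy on real-vs-fake judgments, and
# mean eye-tracking fixation counts grouped by the participant's response.
# Trial records and values are illustrative only.
from statistics import mean

# Each trial: ground-truth label, participant response, fixation count.
trials = [
    {"truth": "real", "response": "real", "fixations": 6},
    {"truth": "fake", "response": "fake", "fixations": 11},
    {"truth": "fake", "response": "real", "fixations": 7},
    {"truth": "real", "response": "fake", "fixations": 10},
    {"truth": "fake", "response": "fake", "fixations": 12},
]

def accuracy(trials):
    """Fraction of trials where the response matches the ground truth."""
    return sum(t["response"] == t["truth"] for t in trials) / len(trials)

def mean_fixations_by_response(trials):
    """Average fixation count, grouped by the participant's judgment."""
    by_resp = {}
    for t in trials:
        by_resp.setdefault(t["response"], []).append(t["fixations"])
    return {resp: mean(v) for resp, v in by_resp.items()}

acc = accuracy(trials)                      # 3 of 5 correct -> 0.6
fix = mean_fixations_by_response(trials)    # {'fake': 11.0, 'real': 6.5}
print(f"accuracy: {acc:.2%}")
print(f"mean fixations (judged fake / judged real): {fix['fake']} / {fix['real']}")
```

In this toy data, trials judged "fake" carry more fixations than those judged "real", mirroring the paper's finding that participants scrutinize images more closely when they suspect a fake.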
Related papers
- Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images [34.02058539403381]
We leverage human semantic knowledge to investigate whether it can be incorporated into fake image detection frameworks.
A preliminary statistical analysis is conducted to explore the distinctive patterns in how humans perceive genuine and altered images.
arXiv Detail & Related papers (2024-03-13T19:56:30Z)
- Exploring the Naturalness of AI-Generated Images [59.04528584651131]
We take the first step to benchmark and assess the visual naturalness of AI-generated images.
We propose the Joint Objective Image Naturalness evaluaTor (JOINT) to automatically predict the naturalness of AGIs in alignment with human ratings.
We demonstrate that JOINT significantly outperforms baselines for providing more subjectively consistent results on naturalness assessment.
arXiv Detail & Related papers (2023-12-09T06:08:09Z)
- Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z)
- The Value of AI Guidance in Human Examination of Synthetically-Generated Faces [4.144518961834414]
We investigate whether human-guided synthetic face detectors can assist non-expert human operators in the task of synthetic image detection.
We conducted a large-scale experiment with more than 1,560 subjects classifying whether an image shows an authentic or synthetically-generated face.
Models trained with human guidance offer better support for human examination of face images than models trained conventionally with cross-entropy loss.
arXiv Detail & Related papers (2022-08-22T18:45:53Z)
- Open-Eye: An Open Platform to Study Human Performance on Identifying AI-Synthesized Faces [51.56417104929796]
We develop an online platform called Open-eye to study human performance in detecting AI-synthesized faces.
We describe the design and workflow of Open-eye in this paper.
arXiv Detail & Related papers (2022-05-13T14:30:59Z)
- Evaluation of Human and Machine Face Detection using a Novel Distinctive Human Appearance Dataset [0.76146285961466]
We evaluate current state-of-the-art face-detection models in their ability to detect faces in images.
The evaluation results show that face-detection algorithms do not generalize well to diverse appearances.
arXiv Detail & Related papers (2021-11-01T02:20:40Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematic empirical analysis of synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- More Real than Real: A Study on Human Visual Perception of Synthetic Faces [7.25613186882905]
We describe a perceptual experiment where volunteers have been exposed to synthetic face images produced by state-of-the-art Generative Adversarial Networks.
Experiment outcomes strongly call into question the human ability to discriminate real faces from synthetic ones generated by modern AI.
arXiv Detail & Related papers (2021-06-14T08:27:25Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can easily spot but that generalize poorly, we propose a new adversarial training method that attempts to blur out these specific artifacts.
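The idea of suppressing model-detectable high-frequency artifacts can be illustrated with a simple FFT low-pass filter. This is a generic stand-in for the concept, not the paper's actual adversarial training method; the function name and `keep_frac` parameter are hypothetical:

```python
# Illustrative sketch: remove high-frequency image content by zeroing
# FFT coefficients outside a central low-frequency window. A generic
# stand-in for "blurring out high-frequency artifacts", not the paper's method.
import numpy as np

def lowpass(image, keep_frac=0.25):
    """Keep only the central keep_frac fraction of frequencies per axis."""
    f = np.fft.fftshift(np.fft.fft2(image))      # center low frequencies
    h, w = image.shape
    kh, kw = int(h * keep_frac / 2), int(w * keep_frac / 2)
    mask = np.zeros_like(f, dtype=bool)
    mask[h // 2 - kh : h // 2 + kh, w // 2 - kw : w // 2 + kw] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))       # noisy stand-in image
smooth = lowpass(img)            # high-frequency energy removed
```

For a noisy input, the filtered image has the same shape but visibly lower variance, since most of the noise energy lives in the discarded high frequencies.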
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.