The Value of AI Guidance in Human Examination of Synthetically-Generated
Faces
- URL: http://arxiv.org/abs/2208.10544v1
- Date: Mon, 22 Aug 2022 18:45:53 GMT
- Title: The Value of AI Guidance in Human Examination of Synthetically-Generated
Faces
- Authors: Aidan Boyd, Patrick Tinsley, Kevin Bowyer, Adam Czajka
- Abstract summary: We investigate whether human-guided synthetic face detectors can assist non-expert human operators in the task of synthetic image detection.
We conducted a large-scale experiment with more than 1,560 subjects classifying whether an image shows an authentic or synthetically-generated face.
Models trained with human guidance offer better support to human examination of face images than models trained traditionally with cross-entropy loss.
- Score: 4.144518961834414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face image synthesis has progressed beyond the point at which humans can
effectively distinguish authentic faces from synthetically generated ones.
Recently developed synthetic face image detectors boast "better-than-human"
discriminative ability, especially those guided by human perceptual
intelligence during the model's training process. In this paper, we investigate
whether these human-guided synthetic face detectors can assist non-expert human
operators in the task of synthetic image detection when compared to models
trained without human guidance. We conducted a large-scale experiment with more
than 1,560 subjects classifying whether an image shows an authentic or
synthetically-generated face and annotating the regions that supported their
decisions. In total, 56,015 annotations across 3,780 unique face images were
collected. All subjects first examined samples without any AI support, followed
by samples given (a) the AI's decision ("synthetic" or "authentic"), (b) class
activation maps highlighting the regions the model deems salient for its decision, or
(c) both the AI's decision and AI's saliency map. Synthetic faces were
generated with six modern Generative Adversarial Networks. Interesting
observations from this experiment include: (1) models trained with
human guidance offer better support to human examination of face images
compared to models trained traditionally using cross-entropy loss, (2) binary
decisions presented to humans offer better support than saliency maps, and (3)
understanding the AI's accuracy helps humans increase their trust in a given
model and thus their overall accuracy. This work demonstrates that although
humans supported by machines achieve better-than-random accuracy of synthetic
face detection, the ways of supplying humans with AI support and of building
trust are key factors determining the effectiveness of the human-AI tandem.
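To make conditions (a)-(c) concrete, the sketch below shows one standard way to produce both the binary decision and the class activation map: Grad-CAM applied to a binary synthetic-face classifier. This is a minimal illustration under assumptions, not the paper's implementation; the ResNet-50 backbone, the choice of layer4 as the target layer, the preprocessing, and the file name face.png are all placeholders.

```python
# Minimal Grad-CAM sketch for a binary synthetic-vs-authentic face classifier.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Stand-in for the trained detector; load real detector weights in practice.
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # {authentic, synthetic}
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, args, output):
    activations["feat"] = output

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

# Sketch-level preprocessing; a real detector would also normalize.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
img = preprocess(Image.open("face.png").convert("RGB")).unsqueeze(0)

logits = model(img)                # shape (1, 2)
pred = int(logits.argmax(dim=1))   # the binary decision shown in condition (a)
logits[0, pred].backward()         # gradient of the predicted class score

# Grad-CAM: weight each feature map by its spatially averaged gradient.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
cam = F.relu((weights * activations["feat"]).sum(dim=1))    # (1, H', W')
cam = F.interpolate(cam.unsqueeze(1), size=img.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1]
# `cam` can now be overlaid on the input as the saliency map in (b)/(c).
```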
Related papers
- Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study [6.661332913985627]
We investigate how humans perceive and distinguish between real and fake images.
Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%.
arXiv Detail & Related papers (2024-09-23T19:34:30Z)
- HumanRefiner: Benchmarking Abnormal Human Generation and Refining with Coarse-to-fine Pose-Reversible Guidance [80.97360194728705]
AbHuman is the first large-scale synthesized human benchmark focusing on anatomical anomalies.
HumanRefiner is a novel plug-and-play approach for the coarse-to-fine refinement of human anomalies in text-to-image generation.
arXiv Detail & Related papers (2024-07-09T15:14:41Z)
- HINT: Learning Complete Human Neural Representations from Limited Viewpoints [69.76947323932107]
We propose a NeRF-based algorithm able to learn a detailed and complete human model from limited viewing angles.
As a result, our method can reconstruct complete humans even from a few viewing angles, increasing performance by more than 15% in PSNR.
arXiv Detail & Related papers (2024-05-30T05:43:09Z)
- Towards the Detection of AI-Synthesized Human Face Images [12.090322373964124]
This paper presents a benchmark of human face images produced by Generative Adversarial Networks (GANs) and a variety of diffusion models (DMs).
The forgery traces introduced by different generative models are then analyzed in the frequency domain to draw insights.
The paper further demonstrates that a detector trained on a frequency representation can generalize well to other unseen generative models (see the frequency-spectrum sketch after this list).
arXiv Detail & Related papers (2024-02-13T19:37:44Z)
- Exploring the Naturalness of AI-Generated Images [59.04528584651131]
We take the first step to benchmark and assess the visual naturalness of AI-generated images.
We propose the Joint Objective Image Naturalness evaluaTor (JOINT) to automatically predict the naturalness of AGIs in a way that aligns with human ratings.
We demonstrate that JOINT significantly outperforms baselines for providing more subjectively consistent results on naturalness assessment.
arXiv Detail & Related papers (2023-12-09T06:08:09Z)
- Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z)
- Uncalibrated Models Can Improve Human-AI Collaboration [10.106324182884068]
We show that presenting AI models as more confident than they actually are can improve human-AI performance (see the confidence-inflation sketch after this list).
We first learn a model for how humans incorporate AI advice using data from thousands of human interactions.
arXiv Detail & Related papers (2022-02-12T04:51:00Z)
- A Study of the Human Perception of Synthetic Faces [10.058235580923583]
We introduce a study of the human perception of synthetic faces generated using different strategies including a state-of-the-art deep learning-based GAN model.
We answer important questions such as: how often do GAN-based and more traditional image-processing-based techniques confuse human observers, and are there subtle cues within a synthetic face image that cause humans to perceive it as a fake without searching for obvious clues?
arXiv Detail & Related papers (2021-11-08T02:03:18Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap (see the mixup sketch after this list).
We also perform a systematic empirical analysis of synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can easily spot but that generalize poorly, we propose a new adversarial training method that attempts to blur out these artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
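The sketches below illustrate techniques named in the related papers above; each is a hedged stand-in, not the cited paper's implementation. First, for "Towards the Detection of AI-Synthesized Human Face Images": a generic frequency representation, the log-magnitude 2D FFT spectrum, on which a detector can be trained. The file name and image size are placeholders.

```python
# Minimal sketch of a frequency representation often used to expose
# generative-model artifacts: the log-magnitude of the 2D FFT.
# Generic stand-in, not the cited paper's exact feature.
import numpy as np
from PIL import Image

def log_spectrum(path: str, size: int = 256) -> np.ndarray:
    """Return a (size, size) log-magnitude spectrum, DC centered."""
    img = Image.open(path).convert("L").resize((size, size))
    x = np.asarray(img, dtype=np.float64) / 255.0
    f = np.fft.fftshift(np.fft.fft2(x))   # center the zero frequency
    return np.log1p(np.abs(f))            # compress dynamic range

# GAN/DM upsampling often leaves periodic peaks or grid patterns in this
# spectrum; a detector can be trained on these maps instead of raw pixels.
spec = log_spectrum("face.png")
print(spec.shape, spec.mean())
```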
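Next, for "Uncalibrated Models Can Improve Human-AI Collaboration": the simplest mechanism for presenting a model as more confident than it is, temperature scaling with T < 1. The fixed temperature and toy logits are assumptions; the paper instead learns from interaction data how humans incorporate advice.

```python
# Minimal sketch of confidence inflation via temperature scaling:
# T < 1 sharpens the softmax, T > 1 softens it. Illustrative only.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([1.2, 0.8])        # weak preference for class 0

calibrated = softmax(logits)         # what the model actually believes
inflated = softmax(logits / 0.25)    # T = 0.25: same ranking, sharper

print(calibrated.round(3))           # [0.599 0.401]
print(inflated.round(3))             # [0.832 0.168]
# The argmax (the advice) is unchanged; only the displayed confidence
# grows, which can change whether a human accepts the advice.
```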
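Finally, for "SynFace: Face Recognition with Synthetic Data": the generic mixup operation that identity mixup (IM) and domain mixup (DM) build on. The toy vectors, one-hot labels, and Beta parameter are illustrative; SynFace's exact IM/DM formulations are in the cited paper.

```python
# Minimal sketch of standard input/label mixup, the operation
# underlying SynFace's IM and DM. Toy data, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha: float = 0.2):
    """Blend two samples and their one-hot labels with a Beta(alpha, alpha) weight."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Two toy "face images" (flattened to vectors) with one-hot identity labels.
x_a, y_a = rng.random(8), np.array([1.0, 0.0])
x_b, y_b = rng.random(8), np.array([0.0, 1.0])

x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)
print(y_mix)  # soft label [lam, 1 - lam], mirroring the pixel blend
```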