More Real than Real: A Study on Human Visual Perception of Synthetic Faces
- URL: http://arxiv.org/abs/2106.07226v1
- Date: Mon, 14 Jun 2021 08:27:25 GMT
- Title: More Real than Real: A Study on Human Visual Perception of Synthetic Faces
- Authors: Federica Lago, Cecilia Pasquini, Rainer Böhme, Hélène Dumont, Valérie Goffaux and Giulia Boato
- Abstract summary: We describe a perceptual experiment where volunteers have been exposed to synthetic face images produced by state-of-the-art Generative Adversarial Networks.
Experiment outcomes reveal how strongly we should call into question our human ability to discriminate real faces from synthetic ones generated through modern AI.
- Score: 7.25613186882905
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep fakes have become extremely popular in recent years, thanks in
part to their increasing realism. There is therefore a need to measure humans'
ability to distinguish between real and synthetic face images when confronted
with cutting-edge generation technologies. We describe the design and results of
a perceptual experiment we have conducted, in which a wide and diverse group of
volunteers was exposed to synthetic face images produced by state-of-the-art
Generative Adversarial Networks (namely, PG-GAN, StyleGAN, and StyleGAN2). The
experiment outcomes reveal how strongly we should call into question our human
ability to discriminate real faces from synthetic ones generated through modern
AI.
Related papers
- Analysis of Human Perception in Distinguishing Real and AI-Generated Faces: An Eye-Tracking Based Study [6.661332913985627]
We investigate how humans perceive and distinguish between real and fake images.
Our analysis of StyleGAN-3 generated images reveals that participants can distinguish real from fake faces with an average accuracy of 76.80%.
arXiv Detail & Related papers (2024-09-23T19:34:30Z)
- VIGFace: Virtual Identity Generation Model for Face Image Synthesis [13.81887339529775]
We propose VIGFace, a novel framework capable of generating synthetic facial images.
It allows for creating virtual facial images without concerns about portrait rights.
It serves as an effective augmentation method by incorporating real existing images.
arXiv Detail & Related papers (2024-03-13T06:11:41Z)
- InceptionHuman: Controllable Prompt-to-NeRF for Photorealistic 3D Human Generation [61.62346472443454]
InceptionHuman is a prompt-to-NeRF framework that allows easy control via a combination of prompts in different modalities to generate photorealistic 3D humans.
InceptionHuman achieves consistent 3D human generation within a progressively refined NeRF space.
arXiv Detail & Related papers (2023-11-27T15:49:41Z)
- Synthesizing Photorealistic Virtual Humans Through Cross-modal Disentanglement [0.8959668207214765]
We propose an end-to-end framework for synthesizing high-quality virtual human faces capable of speaking with accurate lip motion.
Our method runs in real-time, and is able to deliver superior results compared to the current state-of-the-art.
arXiv Detail & Related papers (2022-09-03T03:56:49Z)
- Open-Eye: An Open Platform to Study Human Performance on Identifying AI-Synthesized Faces [51.56417104929796]
We develop an online platform called Open-eye to study human performance in detecting AI-synthesized faces.
We describe the design and workflow of Open-eye in this paper.
arXiv Detail & Related papers (2022-05-13T14:30:59Z)
- A Study of the Human Perception of Synthetic Faces [10.058235580923583]
We introduce a study of the human perception of synthetic faces generated using different strategies including a state-of-the-art deep learning-based GAN model.
We answer important questions, such as: how often do GAN-based and more traditional image-processing techniques confuse human observers, and are there subtle cues within a synthetic face image that cause humans to perceive it as fake without having to search for obvious clues?
arXiv Detail & Related papers (2021-11-08T02:03:18Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematic empirical analysis of synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
- Learning Inverse Rendering of Faces from Real-world Videos [52.313931830408386]
Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption of consistency of albedo and normal.
Our network is trained on both real and synthetic data, benefiting from both.
arXiv Detail & Related papers (2020-03-26T17:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.