Finding AI-Generated Faces in the Wild
- URL: http://arxiv.org/abs/2311.08577v3
- Date: Fri, 5 Apr 2024 17:37:36 GMT
- Title: Finding AI-Generated Faces in the Wild
- Authors: Gonzalo J. Aniano Porcile, Jack Gindi, Shivansh Mundra, James R. Verbus, Hany Farid
- Abstract summary: We focus on a more narrow task of distinguishing a real face from an AI-generated face.
This is particularly applicable when tackling inauthentic online accounts with a fake user profile photo.
We show that by focusing on only faces, a more resilient and general-purpose artifact can be detected.
- Score: 9.390562437823078
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-based image generation has continued to rapidly improve, producing increasingly more realistic images with fewer obvious visual flaws. AI-generated images are being used to create fake online profiles which in turn are being used for spam, fraud, and disinformation campaigns. As the general problem of detecting any type of manipulated or synthesized content is receiving increasing attention, here we focus on a more narrow task of distinguishing a real face from an AI-generated face. This is particularly applicable when tackling inauthentic online accounts with a fake user profile photo. We show that by focusing on only faces, a more resilient and general-purpose artifact can be detected that allows for the detection of AI-generated faces from a variety of GAN- and diffusion-based synthesis engines, and across image resolutions (as low as 128 x 128 pixels) and qualities.
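As a rough illustration of the detection setup the abstract describes, the sketch below fine-tunes a generic CNN as a binary real-vs-generated face classifier. The ResNet-18 backbone, folder layout, and hyperparameters are assumptions for illustration only, not the authors' actual pipeline.

```python
# Illustrative sketch only: a binary real-vs-AI face classifier.
# The backbone (ResNet-18), data layout, and training loop are assumptions,
# not the paper's exact training setup.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

preprocess = transforms.Compose([
    transforms.Resize((128, 128)),  # the paper reports detection down to 128 x 128 pixels
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_model() -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)  # single logit for real vs. generated
    return model

def train(root: str, epochs: int = 5, device: str = "cpu") -> nn.Module:
    # Assumed layout: root/ai/*.jpg and root/real/*.jpg.
    # ImageFolder assigns labels alphabetically, so ai -> 0 and real -> 1.
    data = ImageFolder(root, transform=preprocess)
    loader = DataLoader(data, batch_size=32, shuffle=True)
    model = build_model().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.float().unsqueeze(1).to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```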
Related papers
- Self-Supervised Learning for Detecting AI-Generated Faces as Anomalies [58.11545090128854]
We describe an anomaly detection method for AI-generated faces by leveraging self-supervised learning of camera-intrinsic and face-specific features purely from photographic face images.
The success of our method lies in designing a pretext task that trains a feature extractor to rank four ordinal exchangeable image file format (EXIF) tags and classify artificially manipulated face images.
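A minimal sketch of this multi-task pretext idea follows, assuming a shared encoder with one head that scores images for pairwise EXIF-tag ranking and one head that classifies manipulated faces; the specific tags, losses, and backbone are illustrative assumptions, not the authors' exact design.

```python
# Illustrative sketch of the pretext-task idea: a shared encoder with
# (a) a head whose scores let image pairs be ranked by ordinal EXIF tags, and
# (b) a head that classifies artificially manipulated face images.
import torch
import torch.nn as nn
from torchvision import models

class PretextModel(nn.Module):
    def __init__(self, num_exif_tags: int = 4):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.encoder = backbone                            # feature extractor reused downstream
        self.exif_scores = nn.Linear(512, num_exif_tags)   # one ranking score per EXIF tag
        self.manip_head = nn.Linear(512, 1)                # manipulated vs. pristine face

    def forward(self, x):
        feats = self.encoder(x)
        return self.exif_scores(feats), self.manip_head(feats)

def pretext_loss(model, img_a, img_b, exif_order, manip_labels):
    """exif_order[:, t] = +1.0 if img_a has the larger value of tag t, else -1.0."""
    score_a, manip_a = model(img_a)
    score_b, _ = model(img_b)
    rank_loss = nn.functional.margin_ranking_loss(
        score_a, score_b, exif_order, margin=0.1)
    cls_loss = nn.functional.binary_cross_entropy_with_logits(
        manip_a, manip_labels)
    return rank_loss + cls_loss
```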
arXiv Detail & Related papers (2025-01-04T06:23:24Z)
- Human vs. AI: A Novel Benchmark and a Comparative Study on the Detection of Generated Images and the Impact of Prompts [5.222694057785324]
This work examines the influence of the prompt's level of detail on the detectability of fake images.
We create a novel dataset, COCOXGEN, which consists of real photos from the COCO dataset as well as images generated with SDXL and Fooocus.
Our user study with 200 participants shows that images generated with longer, more detailed prompts are detected significantly more easily than those generated with short prompts.
arXiv Detail & Related papers (2024-12-12T20:37:52Z)
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark [12.368133562194267]
We introduce the AI-Face dataset, the first million-scale demographically annotated AI-generated face image dataset.
Based on this dataset, we conduct the first comprehensive fairness benchmark to assess various AI face detectors.
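A toy sketch of the kind of per-group error analysis such a fairness benchmark implies; the group names, threshold, and metrics below are assumptions for illustration only, not the benchmark's exact protocol.

```python
# Illustrative per-group error-rate check for a face detector's predictions.
from collections import defaultdict

def per_group_error_rates(records, threshold=0.5):
    """records: iterable of (group, label, score) with label 1 = AI-generated."""
    stats = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, label, score in records:
        pred = int(score >= threshold)
        s = stats[group]
        if label == 1:
            s["pos"] += 1
            s["fn"] += int(pred == 0)
        else:
            s["neg"] += 1
            s["fp"] += int(pred == 1)
    return {
        g: {
            "fpr": s["fp"] / s["neg"] if s["neg"] else float("nan"),
            "fnr": s["fn"] / s["pos"] if s["pos"] else float("nan"),
        }
        for g, s in stats.items()
    }

# A large FPR gap across groups means the detector flags real faces of some
# demographics as "AI-generated" more often than others.
rates = per_group_error_rates([
    ("group_a", 0, 0.7), ("group_a", 1, 0.9),
    ("group_b", 0, 0.2), ("group_b", 1, 0.4),
])
print(rates)
```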
arXiv Detail & Related papers (2024-06-02T15:51:33Z)
- Towards the Detection of AI-Synthesized Human Face Images [12.090322373964124]
This paper presents a benchmark including human face images produced by Generative Adversarial Networks (GANs) and a variety of diffusion models (DMs).
The forgery traces introduced by different generative models are then analyzed in the frequency domain to draw various insights.
The paper further demonstrates that a detector trained with frequency representation can generalize well to other unseen generative models.
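A minimal sketch of a frequency-representation detector in the spirit described above, assuming log-magnitude Fourier spectra as features and a simple linear classifier; this is an illustrative stand-in, not the paper's exact pipeline.

```python
# Illustrative frequency-domain detector: images are mapped to log-magnitude
# Fourier spectra and a simple classifier is trained on those features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fft_feature(gray_image: np.ndarray) -> np.ndarray:
    """gray_image: 2D float array. Returns a flattened log-magnitude spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    return np.log1p(np.abs(spectrum)).ravel()

def train_frequency_detector(images, labels):
    """images: list of same-size 2D arrays; labels: 1 = generated, 0 = real."""
    X = np.stack([fft_feature(img) for img in images])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf
```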
arXiv Detail & Related papers (2024-02-13T19:37:44Z)
- Generalized Face Liveness Detection via De-fake Face Generator [52.23271636362843]
Previous Face Anti-spoofing (FAS) methods face the challenge of generalizing to unseen domains.
We propose an Anomalous cue Guided FAS (AG-FAS) method, which can effectively leverage large-scale additional real faces.
Our method achieves state-of-the-art results under cross-domain evaluations with unseen scenarios and unknown presentation attacks.
arXiv Detail & Related papers (2024-01-17T06:59:32Z)
- Open-Eye: An Open Platform to Study Human Performance on Identifying AI-Synthesized Faces [51.56417104929796]
We develop an online platform called Open-eye to study human performance in detecting AI-synthesized faces.
We describe the design and workflow of Open-eye in this paper.
arXiv Detail & Related papers (2022-05-13T14:30:59Z)
- Detecting High-Quality GAN-Generated Face Images using Neural Networks [23.388645531702597]
We propose a new strategy to differentiate GAN-generated images from authentic images by leveraging spectral band discrepancies.
In particular, we characterize face images using the cross-band co-occurrence matrix and the spatial co-occurrence matrix.
We show that the performance gain is substantial, with detection performance exceeding 92% under different post-processing conditions.
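A rough sketch of the two co-occurrence features named above, assuming a simple quantization and a single horizontal offset; the paper's exact configuration may differ.

```python
# Illustrative co-occurrence features: a spatial co-occurrence matrix over
# horizontally adjacent pixels within one band, and a cross-band co-occurrence
# matrix over co-located pixels in two color bands.
import numpy as np

def spatial_cooccurrence(band: np.ndarray, levels: int = 64) -> np.ndarray:
    """band: 2D uint8 array. Joint histogram of (pixel, right neighbor)."""
    q = (band.astype(np.int64) * levels) // 256
    mat, _, _ = np.histogram2d(q[:, :-1].ravel(), q[:, 1:].ravel(),
                               bins=levels, range=[[0, levels], [0, levels]])
    return mat / mat.sum()

def crossband_cooccurrence(band_a: np.ndarray, band_b: np.ndarray,
                           levels: int = 64) -> np.ndarray:
    """Joint histogram of co-located values in two different color bands."""
    qa = (band_a.astype(np.int64) * levels) // 256
    qb = (band_b.astype(np.int64) * levels) // 256
    mat, _, _ = np.histogram2d(qa.ravel(), qb.ravel(),
                               bins=levels, range=[[0, levels], [0, levels]])
    return mat / mat.sum()

# The flattened matrices (e.g., for R-G, R-B, G-B band pairs plus per-band
# spatial matrices) can then be fed to a CNN or a standard classifier.
```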
arXiv Detail & Related papers (2022-03-03T13:53:27Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover corrupted features in deep convolutional neural networks and clean them with dynamically learned masks.
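A simplified sketch of the feature-masking idea, assuming a small convolutional mask generator applied to a backbone feature map; this is an illustrative reading, not the exact FROM architecture.

```python
# Illustrative feature-masking module: predict a mask from the backbone's
# feature map and multiply it back in, so that occlusion-corrupted features
# can be suppressed before the embedding head.
import torch
import torch.nn as nn

class MaskedFeatureCleaner(nn.Module):
    def __init__(self, channels: int = 512):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel, per-location mask in [0, 1]
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        mask = self.mask_net(feat)  # low values suppress corrupted features
        return feat * mask

# Usage: insert between a face-recognition backbone and its embedding head.
cleaner = MaskedFeatureCleaner(channels=512)
cleaned = cleaner(torch.randn(2, 512, 7, 7))
```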
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can easily spot but that generalize poorly, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.