Self-Supervised Learning for Detecting AI-Generated Faces as Anomalies
- URL: http://arxiv.org/abs/2501.02207v1
- Date: Sat, 04 Jan 2025 06:23:24 GMT
- Title: Self-Supervised Learning for Detecting AI-Generated Faces as Anomalies
- Authors: Mian Zou, Baosheng Yu, Yibing Zhan, Kede Ma
- Abstract summary: We describe an anomaly detection method for AI-generated faces by leveraging self-supervised learning of camera-intrinsic and face-specific features purely from photographic face images.
The success of our method lies in designing a pretext task that trains a feature extractor to rank four ordinal exchangeable image file format (EXIF) tags and classify artificially manipulated face images.
- Score: 58.11545090128854
- Abstract: The detection of AI-generated faces is commonly approached as a binary classification task. Nevertheless, the resulting detectors frequently struggle to adapt to novel AI face generators, which evolve rapidly. In this paper, we describe an anomaly detection method for AI-generated faces by leveraging self-supervised learning of camera-intrinsic and face-specific features purely from photographic face images. The success of our method lies in designing a pretext task that trains a feature extractor to rank four ordinal exchangeable image file format (EXIF) tags and classify artificially manipulated face images. Subsequently, we model the learned feature distribution of photographic face images using a Gaussian mixture model. Faces with low likelihoods are flagged as AI-generated. Both quantitative and qualitative experiments validate the effectiveness of our method. Our code is available at https://github.com/MZMMSEC/AIGFD_EXIF.git.
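The anomaly-detection stage described in the abstract (fit a Gaussian mixture model to features of photographic faces, then flag low-likelihood faces as AI-generated) can be sketched as follows. This is a minimal illustration, not the paper's released code: the EXIF-ranking feature extractor is replaced by random stand-in features, and all names, dimensions, and the percentile threshold are illustrative assumptions.

```python
# Illustrative sketch of GMM-based anomaly detection over learned features.
# The real feature extractor (trained on the EXIF-ranking pretext task) is
# replaced here by synthetic stand-in features; names are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for features extracted from photographic (real) face images.
real_features = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# Model the distribution of real-face features only.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(real_features)

# Pick a log-likelihood threshold, e.g. the 1st percentile on real faces
# (the choice of percentile here is an assumption, not from the paper).
threshold = np.percentile(gmm.score_samples(real_features), 1)

def is_ai_generated(features: np.ndarray) -> np.ndarray:
    """Flag faces whose log-likelihood under the real-face GMM is low."""
    return gmm.score_samples(features) < threshold

# A feature vector far from the real-face distribution gets flagged.
outlier = np.full((1, 8), 10.0)
print(bool(is_ai_generated(outlier)[0]))  # True
```

Only features of photographic faces are needed to fit the model, which is what lets the detector generalize to unseen generators: anything off the real-face manifold scores a low likelihood, regardless of which model produced it.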
Related papers
- Detecting Discrepancies Between AI-Generated and Natural Images Using Uncertainty [91.64626435585643]
We propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty to mitigate misuse and associated risks.
The motivation arises from the fundamental assumption regarding the distributional discrepancy between natural and AI-generated images.
We propose to leverage large-scale pre-trained models to calculate the uncertainty as the score for detecting AI-generated images.
arXiv Detail & Related papers (2024-12-08T11:32:25Z)
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We propose AIDE (AI-generated Image DEtector with Hybrid Features) to detect AI-generated images.
AIDE achieves improvements of +3.5% and +4.6% over state-of-the-art methods.
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Finding AI-Generated Faces in the Wild [9.390562437823078]
We focus on the narrower task of distinguishing a real face from an AI-generated face.
This is particularly applicable when tackling inauthentic online accounts with a fake user profile photo.
We show that by focusing only on faces, a more resilient and general-purpose artifact can be detected.
arXiv Detail & Related papers (2023-11-14T22:46:01Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods either detect visual artifacts in generated images or learn discriminative features from both real and generated images through massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- FACE-AUDITOR: Data Auditing in Facial Recognition Systems [24.082527732931677]
Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with a few face images.
To prevent the face images from being misused, one straightforward approach is to modify the raw face images before sharing them.
We propose a complete toolkit FACE-AUDITOR that can query the few-shot-based facial recognition model and determine whether any of a user's face images is used in training the model.
arXiv Detail & Related papers (2023-04-05T23:03:54Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can easily spot yet generalize poorly from, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- One-Shot GAN Generated Fake Face Detection [3.3707422585608953]
We propose a universal One-Shot GAN generated fake face detection method.
The proposed method is based on extracting out-of-context objects from faces via scene understanding models.
Our experiments show that we can discriminate fake faces from real ones using out-of-context features.
arXiv Detail & Related papers (2020-03-27T05:51:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.