Towards Universal Fake Image Detectors that Generalize Across Generative Models
- URL: http://arxiv.org/abs/2302.10174v2
- Date: Mon, 1 Apr 2024 04:00:31 GMT
- Title: Towards Universal Fake Image Detectors that Generalize Across Generative Models
- Authors: Utkarsh Ojha, Yuheng Li, Yong Jae Lee
- Abstract summary: We show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models.
We propose to perform real-vs-fake classification without learning, using a feature space not explicitly trained to distinguish real from fake images.
- Score: 36.18427140427858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With generative models proliferating at a rapid rate, there is a growing need for general purpose fake image detectors. In this work, we first show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models when trained to detect GAN fake images. Upon analysis, we find that the resulting classifier is asymmetrically tuned to detect patterns that make an image fake. The real class becomes a sink class holding anything that is not fake, including generated images from models not accessible during training. Building upon this discovery, we propose to perform real-vs-fake classification without learning; i.e., using a feature space not explicitly trained to distinguish real from fake images. We use nearest neighbor and linear probing as instantiations of this idea. When given access to the feature space of a large pretrained vision-language model, the very simple baseline of nearest neighbor classification has surprisingly good generalization ability in detecting fake images from a wide variety of generative models; e.g., it improves upon the SoTA by +15.07 mAP and +25.90% acc when tested on unseen diffusion and autoregressive models.
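The nearest-neighbor baseline from the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the paper embeds images with a frozen pretrained vision-language encoder (CLIP), whereas here small random vectors stand in for those features, and the bank sizes and cluster locations are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    # Unit-normalize rows so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-in feature banks: "real" features clustered near one direction,
# "fake" features near another. In the paper these would be CLIP features
# of held-out real images and of fakes from one accessible generator.
real_bank = l2_normalize(rng.normal(loc=[1.0, 0.0, 0.0], scale=0.1, size=(100, 3)))
fake_bank = l2_normalize(rng.normal(loc=[0.0, 1.0, 0.0], scale=0.1, size=(100, 3)))

def classify(query):
    """Label a query by whichever bank holds its nearest (cosine) neighbor."""
    q = l2_normalize(np.asarray(query, dtype=float))
    sim_real = (real_bank @ q).max()  # best cosine similarity in the real bank
    sim_fake = (fake_bank @ q).max()
    return "real" if sim_real >= sim_fake else "fake"

print(classify([0.9, 0.1, 0.0]))  # near the real cluster -> real
print(classify([0.1, 0.9, 0.0]))  # near the fake cluster -> fake
```

The key design point the paper stresses is that no parameter of the feature space is trained on the real-vs-fake task; only the memorized banks and the distance rule do the classification.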
Related papers
- On the Effectiveness of Dataset Alignment for Fake Image Detection [28.68129042301801]
A good detector should focus on the generative model's fingerprints while ignoring image properties such as semantic content, resolution, and file format.
In this work, we argue that in addition to these algorithmic choices, we also require a well aligned dataset of real/fake images to train a robust detector.
For the family of LDMs, we propose a very simple way to achieve this: we reconstruct all the real images using the LDM's autoencoder, without any denoising operation. We then train a model to separate these real images from their reconstructions.
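The alignment recipe above amounts to building a paired dataset in which each "fake" is a lossy reconstruction of its own "real" counterpart. A toy sketch of that pairing follows; the actual work passes images through an LDM's autoencoder, and the coarse quantization here is only a hypothetical stand-in for that lossy encode-decode round trip.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_reconstruct(img, levels=16):
    # Hypothetical stand-in for an autoencoder round trip: quantizing to a
    # coarse grid discards fine detail the way a lossy encoder-decoder would.
    return np.round(img * (levels - 1)) / (levels - 1)

real_images = rng.random((8, 4, 4))          # 8 tiny "real" images in [0, 1]
recon_images = toy_reconstruct(real_images)  # aligned reconstructions

# Aligned training pairs: label 0 = real image, label 1 = its reconstruction.
# Because each pair shares content, resolution, and format, only the
# reconstruction fingerprint separates the two classes.
dataset = [(x, 0) for x in real_images] + [(x, 1) for x in recon_images]
print(len(dataset))
```

A detector trained on such pairs cannot shortcut through semantic or formatting cues, which is exactly the alignment the entry argues for.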
arXiv Detail & Related papers (2024-10-15T17:58:07Z) - Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
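One way to make the "start from real images" idea concrete is one-class modeling: fit a distribution to real-image features only, then flag anything far from it as generated. The sketch below uses a Gaussian fit with Mahalanobis distance as an assumed stand-in for the paper's learned dense subspace; the synthetic features and the threshold are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in features of real images; the real system would use features
# from a trained mapping, not raw Gaussians.
real_feats = rng.normal(0.0, 1.0, size=(500, 4))

mean = real_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(real_feats, rowvar=False))

def is_generated(feat, threshold=4.0):
    """Flag a feature vector as generated if it lies far outside the
    real-image cluster (Mahalanobis distance above the threshold)."""
    d = np.asarray(feat, dtype=float) - mean
    return float(np.sqrt(d @ cov_inv @ d)) > threshold

print(is_generated([0.1, -0.2, 0.0, 0.3]))  # inside the real cluster
print(is_generated([9.0, 9.0, 9.0, 9.0]))   # far outside the cluster
```

Because no generated images are used to fit the model, the decision rule is, by construction, agnostic to which generative model produced a fake.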
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images, raising concerns about misinformation and copyright infringement.
Deepfake detection techniques have been developed to distinguish real from fake images.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Real Face Foundation Representation Learning for Generalized Deepfake
Detection [74.4691295738097]
The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security.
It is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation.
We propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets.
arXiv Detail & Related papers (2023-03-15T08:27:56Z) - DA-FDFtNet: Dual Attention Fake Detection Fine-tuning Network to Detect
Various AI-Generated Fake Images [21.030153777110026]
It has become much easier to create fake images such as "Deepfakes".
Recent research has introduced few-shot learning, which uses a small amount of training data to produce fake images and videos more effectively.
In this work, we propose the Dual Attention Fake Detection Fine-tuning Network (DA-FDFtNet) to detect manipulated fake face images.
arXiv Detail & Related papers (2021-12-22T16:25:24Z) - CutPaste: Self-Supervised Learning for Anomaly Detection and
Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
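The augmentation at the heart of CutPaste can be sketched directly: cut a rectangular patch from a normal image and paste it at another random location, yielding a synthetic "anomalous" sample for the self-supervised stage. Patch size and positions below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def cutpaste(img, patch_h=4, patch_w=4):
    """Return a copy of img with a random patch cut out and pasted at a
    random other location, simulating a local irregularity."""
    h, w = img.shape
    out = img.copy()
    # Top-left corner of the patch to cut.
    ys, xs = rng.integers(0, h - patch_h), rng.integers(0, w - patch_w)
    # Top-left corner of the location to paste it at.
    yd, xd = rng.integers(0, h - patch_h), rng.integers(0, w - patch_w)
    out[yd:yd + patch_h, xd:xd + patch_w] = img[ys:ys + patch_h, xs:xs + patch_w]
    return out

normal = rng.random((16, 16))      # a stand-in "normal" grayscale image
augmented = cutpaste(normal)
# The self-supervised stage then trains a network to tell `normal` from
# `augmented`, and a one-class classifier is fit on the learned features.
print(augmented.shape)
```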
arXiv Detail & Related papers (2021-04-08T19:04:55Z) - What makes fake images detectable? Understanding properties that
generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z) - One-Shot GAN Generated Fake Face Detection [3.3707422585608953]
We propose a universal One-Shot GAN generated fake face detection method.
The proposed method is based on extracting out-of-context objects from faces via scene understanding models.
Our experiments show that we can discriminate fake faces from real ones using out-of-context features.
arXiv Detail & Related papers (2020-03-27T05:51:14Z) - FDFtNet: Facing Off Fake Images using Fake Detection Fine-tuning Network [19.246576904646172]
We propose a light-weight fine-tuning neural network-based architecture called FDFtNet.
Our approach aims to reuse popular pre-trained models with only a few images for fine-tuning to effectively detect fake images.
Our FDFtNet achieves an overall accuracy of 90.29% in detecting fake images generated from the GANs-based dataset.
arXiv Detail & Related papers (2020-01-05T16:04:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.