What makes fake images detectable? Understanding properties that generalize
- URL: http://arxiv.org/abs/2008.10588v1
- Date: Mon, 24 Aug 2020 17:50:28 GMT
- Title: What makes fake images detectable? Understanding properties that generalize
- Authors: Lucy Chai, David Bau, Ser-Nam Lim, Phillip Isola
- Abstract summary: Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
- Score: 55.4211069143719
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quality of image generation and manipulation is reaching impressive
levels, making it increasingly difficult for a human to distinguish between
what is real and what is fake. However, deep networks can still pick up on the
subtle artifacts in these doctored images. We seek to understand what
properties of fake images make them detectable and identify what generalizes
across different model architectures, datasets, and variations in training. We
use a patch-based classifier with limited receptive fields to visualize which
regions of fake images are more easily detectable. We further show a technique
to exaggerate these detectable properties and demonstrate that, even when the
image generator is adversarially finetuned against a fake image classifier, it
is still imperfect and leaves detectable artifacts in certain image patches.
Code is available at https://chail.github.io/patch-forensics/.
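A minimal sketch of the patch-based idea, assuming a small fully convolutional PyTorch network whose stack of small convolutions keeps the receptive field limited; the layers and sizes below are illustrative and not the paper's exact truncated backbones (see the linked code for those).

```python
# Illustrative patch-based real/fake classifier with a limited receptive field.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Fully convolutional net that outputs one real/fake logit per patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                 # a few small convolutions
            nn.Conv2d(3, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.head = nn.Conv2d(128, 2, kernel_size=1)   # per-patch logits

    def forward(self, x):
        return self.head(self.features(x))            # (B, 2, H', W') patch map

model = PatchClassifier()
patch_logits = model(torch.randn(1, 3, 256, 256))
# Averaging the patch map gives an image-level decision; the map itself can be
# visualized as a heatmap of which regions look most detectably fake.
image_logits = patch_logits.mean(dim=(2, 3))
```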
Related papers
- On the Effectiveness of Dataset Alignment for Fake Image Detection [28.68129042301801]
A good detector should focus on the generative model's fingerprints while ignoring image properties such as semantic content, resolution, file format, etc.
In this work, we argue that in addition to these algorithmic choices, we also require a well-aligned dataset of real/fake images to train a robust detector.
For the family of LDMs, we propose a very simple way to achieve this: we reconstruct all the real images using the LDM's autoencoder, without any denoising operation. We then train a model to separate these real images from their reconstructions.
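A rough sketch of this reconstruction step, assuming the Hugging Face diffusers AutoencoderKL as the LDM autoencoder; the model id, preprocessing, and tensor shapes are illustrative assumptions rather than the paper's exact setup.

```python
# Build an aligned real/fake pair: a real image and its reconstruction through
# an LDM autoencoder, with no denoising/diffusion step involved.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()  # example model id

@torch.no_grad()
def reconstruct(real: torch.Tensor) -> torch.Tensor:
    """real: (B, 3, H, W) in [-1, 1], with H and W divisible by 8."""
    latents = vae.encode(real).latent_dist.mode()  # encode only, no diffusion
    return vae.decode(latents).sample              # decoded counterpart ("fake")

real = torch.rand(1, 3, 256, 256) * 2 - 1          # stand-in for a batch of real images
fake = reconstruct(real)
# A detector trained to separate `real` from `fake` cannot exploit content,
# resolution, or format differences: the pairs differ only in autoencoder artifacts.
```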
arXiv Detail & Related papers (2024-10-15T17:58:07Z)
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We place face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- FakeBench: Probing Explainable Fake Image Detection via Large Multimodal Models [62.66610648697744]
We introduce a taxonomy of generative visual forgery concerning human perception, based on which we collect forgery descriptions in human natural language.
FakeBench examines LMMs with four evaluation criteria: detection, reasoning, interpretation and fine-grained forgery analysis.
This research presents a paradigm shift towards transparency for the fake image detection area.
arXiv Detail & Related papers (2024-04-20T07:28:55Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods either detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated-image detection problem from a new perspective: start from real images.
The idea is to find what real images have in common and map them to a dense subspace in feature space, so that generated images, regardless of the generative model that produced them, are projected outside that subspace.
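The paper learns this mapping itself; as a loosely related stand-in, the sketch below models real-image features in a frozen feature space with a single Gaussian and scores test images by Mahalanobis distance, with all names and dimensions illustrative.

```python
# One-class scoring from real images only: fit a Gaussian to real-image
# features and treat images far from it as likely generated.
import torch

def fit_real_distribution(real_feats: torch.Tensor):
    """real_feats: (N, D) features of real images only."""
    mean = real_feats.mean(dim=0)
    centered = real_feats - mean
    cov = centered.T @ centered / (real_feats.shape[0] - 1)
    cov = cov + 1e-4 * torch.eye(cov.shape[0])     # regularize for invertibility
    return mean, torch.linalg.inv(cov)

def outlier_score(feat: torch.Tensor, mean: torch.Tensor, cov_inv: torch.Tensor) -> float:
    """Mahalanobis distance: larger values suggest the image lies outside the real subspace."""
    d = feat - mean
    return torch.sqrt(d @ cov_inv @ d).item()

real_feats = torch.randn(1000, 64)                 # stand-in features of real images
mean, cov_inv = fit_real_distribution(real_feats)
score = outlier_score(torch.randn(64), mean, cov_inv)
```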
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images, raising concerns about misinformation and copyright infringement.
Deepfake detection techniques have been developed to distinguish real images from fake ones.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- Deepfake Detection of Occluded Images Using a Patch-based Approach [1.6114012813668928]
We present a deep learning approach that uses the entire face as well as face patches to distinguish real from fake images in the presence of occlusion.
For producing fake images, StyleGAN and StyleGAN2 are trained on FFHQ images, while StarGAN and PGGAN are trained on CelebA images.
The proposed approach reaches higher accuracy in earlier epochs than other methods and improves on state-of-the-art results by 0.4%-7.9% on the different constructed datasets.
arXiv Detail & Related papers (2023-04-10T12:12:14Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Towards Universal Fake Image Detectors that Generalize Across Generative Models [36.18427140427858]
We show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models.
We propose to perform real-vs-fake classification without learning, using a feature space not explicitly trained to distinguish real from fake images.
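One simple instance of classification without learned real/fake features is nearest-neighbor lookup against small banks of features from a frozen, general-purpose encoder; the sketch below assumes the feature banks are precomputed, and the dimensionality and bank sizes are illustrative.

```python
# Nearest-neighbor real/fake labeling in a frozen feature space.
import torch
import torch.nn.functional as F

def nearest_neighbor_label(test_feat, real_bank, fake_bank):
    """Banks are (N, D) feature tensors from a frozen encoder; test_feat is (D,)."""
    test = F.normalize(test_feat, dim=-1)
    sim_real = (F.normalize(real_bank, dim=-1) @ test).max()
    sim_fake = (F.normalize(fake_bank, dim=-1) @ test).max()
    return "real" if sim_real >= sim_fake else "fake"

real_bank = torch.randn(500, 768)   # features of known real images
fake_bank = torch.randn(500, 768)   # features of images from one known generator
print(nearest_neighbor_label(torch.randn(768), real_bank, fake_bank))
```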
arXiv Detail & Related papers (2023-02-20T18:59:04Z)
- Identifying Invariant Texture Violation for Robust Deepfake Detection [17.306386179823576]
We propose the Invariant Texture Learning framework, which only accesses the published dataset with low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
arXiv Detail & Related papers (2020-12-19T03:02:15Z)
- Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
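A minimal stand-in for the identity-comparison idea (not the OuterFace algorithm itself): compare a face embedding of the suspect image against reference embeddings of the target identity, with the face embedder and decision threshold left as assumptions.

```python
# Flag a suspect image as fake if its face embedding does not match the
# claimed identity's reference embeddings.
import torch
import torch.nn.functional as F

def identity_consistency(suspect_emb: torch.Tensor, reference_embs: torch.Tensor) -> float:
    """Cosine similarity between the suspect embedding and its closest reference."""
    suspect = F.normalize(suspect_emb, dim=-1)
    refs = F.normalize(reference_embs, dim=-1)
    return (refs @ suspect).max().item()

suspect_emb = torch.randn(512)          # embedding of the suspect face (any face embedder)
reference_embs = torch.randn(5, 512)    # embeddings of known images of the target identity
is_fake = identity_consistency(suspect_emb, reference_embs) < 0.3  # illustrative threshold
```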
arXiv Detail & Related papers (2020-12-07T18:59:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.