Leveraging Frequency Analysis for Deep Fake Image Recognition
- URL: http://arxiv.org/abs/2003.08685v3
- Date: Fri, 26 Jun 2020 11:21:45 GMT
- Title: Leveraging Frequency Analysis for Deep Fake Image Recognition
- Authors: Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer,
Dorothea Kolossa, Thorsten Holz
- Abstract summary: Deep neural networks can generate images that are astonishingly realistic, so much so that it is often hard for humans to distinguish them from actual photos.
These achievements have been largely made possible by Generative Adversarial Networks (GANs)
In this paper, we show that in frequency space, GAN-generated images exhibit severe artifacts that can be easily identified.
- Score: 35.1862941141084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks can generate images that are astonishingly realistic, so
much so that it is often hard for humans to distinguish them from actual
photos. These achievements have been largely made possible by Generative
Adversarial Networks (GANs). While deep fake images have been thoroughly
investigated in the image domain - a classical approach from the area of image
forensics - an analysis in the frequency domain has been missing so far. In
this paper, we address this shortcoming and our results reveal that in
frequency space, GAN-generated images exhibit severe artifacts that can be
easily identified. We perform a comprehensive analysis, showing that these
artifacts are consistent across different neural network architectures, data
sets, and resolutions. In a further investigation, we demonstrate that these
artifacts are caused by upsampling operations found in all current GAN
architectures, indicating a structural and fundamental problem in the way
images are generated via GANs. Based on this analysis, we demonstrate how the
frequency representation can be used to identify deep fake images in an
automated way, surpassing state-of-the-art methods.
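The upsampling artifacts described in the abstract can be illustrated with a small, self-contained sketch. This is a hedged illustration under simplifying assumptions, not the paper's actual pipeline (which trains classifiers on DCT spectra of real GAN outputs): nearest-neighbour upsampling of a smooth image leaves a periodic, grid-like trace that shows up as excess high-frequency energy in the 2D spectrum.

```python
import numpy as np

# A stand-in "natural" image: smooth, low-frequency content only.
n = 64
x = np.linspace(0, 1, n, endpoint=False)
base = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))

# Nearest-neighbour upsampling, as found in many GAN decoders:
# downsample by 2, then repeat each pixel in a 2x2 block. The pixel
# repetition imprints a periodic pattern on the output.
upsampled = np.kron(base[::2, ::2], np.ones((2, 2)))

def log_spectrum(img):
    """Centred 2D log-magnitude spectrum (low frequencies in the middle)."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

def high_freq_energy(img, keep=8):
    """Spectral energy outside a small low-frequency box at the centre."""
    spec = log_spectrum(img)
    c = img.shape[0] // 2
    spec[c - keep:c + keep, c - keep:c + keep] = 0.0  # remove low band
    return spec.sum()

# The upsampled image carries far more high-frequency energy; this is
# the kind of spectral artifact a simple classifier can pick up.
print(high_freq_energy(upsampled) > high_freq_energy(base))  # True
```

The names `log_spectrum` and `high_freq_energy` are illustrative helpers, not functions from the paper; the same comparison could be run on real GAN outputs versus camera images.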
Related papers
- Rethinking the Up-Sampling Operations in CNN-based Generative Network
for Generalizable Deepfake Detection [86.97062579515833]
We introduce the concept of Neighboring Pixel Relationships (NPR) as a means to capture and characterize the generalized structural artifacts stemming from up-sampling operations.
A comprehensive analysis is conducted on an open-world dataset, comprising samples generated by 28 distinct generative models.
This analysis establishes a new state-of-the-art performance, showcasing a remarkable 11.6% improvement over existing methods.
arXiv Detail & Related papers (2023-12-16T14:27:06Z)
- Multi-Channel Cross Modal Detection of Synthetic Face Images [0.0]
Synthetically generated face images have been shown to be indistinguishable from real images by humans.
New and improved generative models are proposed at a rapid pace, and arbitrary image post-processing can be applied.
We propose a multi-channel architecture for detecting entirely synthetic face images.
arXiv Detail & Related papers (2023-11-28T13:30:10Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Real Face Foundation Representation Learning for Generalized Deepfake Detection [74.4691295738097]
The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security.
It is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation.
We propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets.
arXiv Detail & Related papers (2023-03-15T08:27:56Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Misleading Deep-Fake Detection with GAN Fingerprints [14.459389888856412]
We show that an adversary can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image.
Our results show that an adversary can often remove GAN fingerprints and thus evade the detection of generated images.
arXiv Detail & Related papers (2022-05-25T07:32:12Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real to human eyes.
We propose a novel fake-detection method that re-synthesizes test images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Fighting deepfakes by detecting GAN DCT anomalies [0.0]
State-of-the-art algorithms employ deep neural networks to detect fake contents.
A new, fast detection method able to discriminate Deepfake images with high precision is proposed.
The method exceeds the state of the art and also offers many insights in terms of explainability.
arXiv Detail & Related papers (2021-01-24T19:45:11Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all generated summaries) and is not responsible for any consequences of its use.