Misleading Deep-Fake Detection with GAN Fingerprints
- URL: http://arxiv.org/abs/2205.12543v1
- Date: Wed, 25 May 2022 07:32:12 GMT
- Title: Misleading Deep-Fake Detection with GAN Fingerprints
- Authors: Vera Wesselkamp and Konrad Rieck and Daniel Arp and Erwin Quiring
- Abstract summary: We show that an adversary can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image.
Our results show that an adversary can often remove GAN fingerprints and thus evade the detection of generated images.
- Score: 14.459389888856412
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative adversarial networks (GANs) have made remarkable progress in
synthesizing realistic-looking images that effectively outsmart even humans.
Although several detection methods can recognize these deep fakes by checking
for image artifacts from the generation process, multiple counterattacks have
demonstrated their limitations. These attacks, however, still require certain
conditions to hold, such as interacting with the detection method or adjusting
the GAN directly. In this paper, we introduce a novel class of simple
counterattacks that overcomes these limitations. In particular, we show that an
adversary can remove indicative artifacts, the GAN fingerprint, directly from
the frequency spectrum of a generated image. We explore different realizations
of this removal, ranging from filtering high frequencies to more nuanced
frequency-peak cleansing. We evaluate the performance of our attack with
different detection methods, GAN architectures, and datasets. Our results show
that an adversary can often remove GAN fingerprints and thus evade the
detection of generated images.
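As a rough, hedged illustration of the attack described in the abstract, the sketch below implements the simplest variant, removing high frequencies from a generated image's 2D FFT spectrum. The function name `remove_high_frequencies`, the `keep_ratio` parameter, and the NumPy pipeline are illustrative assumptions, not the authors' exact bar, block, or peak-cleansing filters.

```python
# Toy frequency-domain fingerprint removal: low-pass filter a grayscale image.
# This only approximates the "filtering high frequencies" variant; the paper's
# more nuanced frequency-peak cleansing targets specific spectral peaks instead.
import numpy as np

def remove_high_frequencies(image: np.ndarray, keep_ratio: float = 0.75) -> np.ndarray:
    """Zero out high-frequency FFT coefficients of a 2D grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # move the DC term to the center
    h, w = image.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(cy * keep_ratio), int(cx * keep_ratio)
    keep = np.zeros((h, w), dtype=bool)              # True = coefficient is kept
    keep[cy - ry:cy + ry, cx - rx:cx + rx] = True
    spectrum[~keep] = 0.0                            # discard the outer (high-frequency) band
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
    return np.clip(filtered, 0.0, 255.0)

# Hypothetical usage: cleaned = remove_high_frequencies(fake_gray.astype(np.float64))
```

Low-pass filtering of this kind presumably trades spectral cleanliness for mild blurring in the pixel domain, which is consistent with the paper also exploring more targeted frequency-peak cleansing.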
Related papers
- Deepfake Detection without Deepfakes: Generalization via Synthetic Frequency Patterns Injection [12.33030785907372]
Deepfake detectors are typically trained on large sets of pristine and generated images.
Deepfake detectors excel at identifying deepfakes created through methods encountered during training but struggle with those generated by unknown techniques.
This paper introduces a learning approach aimed at significantly enhancing the generalization capabilities of deepfake detectors.
arXiv Detail & Related papers (2024-03-20T10:33:10Z) - Rethinking the Up-Sampling Operations in CNN-based Generative Network
for Generalizable Deepfake Detection [86.97062579515833]
We introduce the concept of Neighboring Pixel Relationships (NPR) as a means to capture and characterize the generalized structural artifacts stemming from up-sampling operations.
A comprehensive analysis is conducted on an open-world dataset, comprising samples generated by 28 distinct generative models.
This analysis culminates in the establishment of a novel state-of-the-art performance, showcasing a remarkable 11.6% improvement over existing methods.
arXiv Detail & Related papers (2023-12-16T14:27:06Z) - MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential
Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z) - Black-Box Attack against GAN-Generated Image Detector with Contrastive
Perturbation [0.4297070083645049]
We propose a new black-box attack method against GAN-generated image detectors.
A novel contrastive learning strategy is adopted to train the encoder-decoder network based anti-forensic model.
The proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs.
arXiv Detail & Related papers (2022-11-07T12:56:14Z) - Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z) - FrePGAN: Robust Deepfake Detection Using Frequency-level Perturbations [12.027711542565315]
We design a framework to generalize the deepfake detector for both the known and unseen GAN models.
Our framework generates frequency-level perturbation maps to make the generated images indistinguishable from real images.
For experiments, we design new test scenarios that differ from the training settings in GAN models, color manipulations, and object categories.
arXiv Detail & Related papers (2022-02-07T16:45:11Z) - Self-supervised GAN Detector [10.963740942220168]
Generative models can be abused for malicious purposes, such as fraud, defamation, and fake news.
We propose a novel framework to distinguish the unseen generated images outside of the training settings.
Our proposed method is built around an artificial fingerprint generator that reconstructs high-quality artificial fingerprints of GAN images.
arXiv Detail & Related papers (2021-11-12T06:19:04Z) - Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real media to the human eye.
We propose a novel fake-detection approach that re-synthesizes test images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z) - Representative Forgery Mining for Fake Face Detection [52.896286647898386]
We propose an attention-based data augmentation framework to guide the detector to refine and enlarge its attention.
Our method tracks and occludes the Top-N sensitive facial regions, encouraging the detector to mine previously ignored regions for more representative forgery cues.
arXiv Detail & Related papers (2021-04-14T03:24:19Z) - Artificial Fingerprinting for Generative Models: Rooting Deepfake
Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z) - Leveraging Frequency Analysis for Deep Fake Image Recognition [35.1862941141084]
Deep neural networks can generate images that are astonishingly realistic, so much so that it is often hard for humans to distinguish them from actual photos.
These achievements have been largely made possible by Generative Adversarial Networks (GANs).
In this paper, we show that in frequency space, GAN-generated images exhibit severe artifacts that can be easily identified (a toy spectrum-inspection sketch follows this list).
arXiv Detail & Related papers (2020-03-19T11:06:54Z)