FakePolisher: Making DeepFakes More Detection-Evasive by Shallow Reconstruction
- URL: http://arxiv.org/abs/2006.07533v3
- Date: Mon, 17 Aug 2020 07:27:28 GMT
- Title: FakePolisher: Making DeepFakes More Detection-Evasive by Shallow Reconstruction
- Authors: Yihao Huang, Felix Juefei-Xu, Run Wang, Qing Guo, Lei Ma, Xiaofei Xie,
Jianwen Li, Weikai Miao, Yang Liu, Geguang Pu
- Abstract summary: GAN-based image generation methods are still imperfect; their upsampling designs leave certain artifact patterns in the synthesized image.
In this paper, we devise a simple yet powerful approach termed FakePolisher that performs shallow reconstruction of fake images through a learned linear dictionary.
A comprehensive evaluation on 3 state-of-the-art DeepFake detection methods and fake images generated by 16 popular GAN-based fake image generation techniques demonstrates the effectiveness of our technique.
- Score: 30.59382916497875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: At this moment, GAN-based image generation methods are still
imperfect: their upsampling designs leave certain artifact patterns in the
synthesized image. Such artifact patterns can be easily exploited by recent
detection methods to tell real and GAN-synthesized images apart. However, the
existing detection methods put much emphasis on these artifact patterns and can
become futile if such patterns are reduced. Towards reducing the artifacts in
synthesized images, in this paper we devise a simple yet powerful approach
termed FakePolisher that performs shallow reconstruction of fake images through
a learned linear dictionary, intending to effectively and efficiently reduce
the artifacts introduced during image synthesis. A comprehensive evaluation on
3 state-of-the-art DeepFake detection methods and fake images generated by 16
popular GAN-based fake image generation techniques demonstrates the
effectiveness of our technique. Overall, by reducing artifact patterns, our
technique significantly reduces the accuracy of the 3 state-of-the-art fake
image detection methods, i.e., by 47% on average and up to 93% in the worst
case.
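The core mechanism, shallow reconstruction through a learned linear dictionary, can be sketched as follows: learn a linear subspace from real images, then project each fake image (or patch) onto that subspace and reconstruct it, discarding the off-subspace components where generator upsampling artifacts tend to concentrate. The snippet below is a minimal PCA/SVD-style stand-in for the paper's dictionary learning, written for illustration only; the function names and patch dimensions are assumptions, not the authors' implementation.

```python
import numpy as np

def learn_dictionary(real_patches, n_components):
    """Learn a linear basis from flattened real-image patches via SVD (PCA-style).

    real_patches: (n_samples, dim) array of flattened patches from real images.
    Returns the data mean and the top n_components orthonormal basis vectors.
    """
    mean = real_patches.mean(axis=0)
    # Rows of vt are orthonormal directions spanning the "real image" subspace.
    _, _, vt = np.linalg.svd(real_patches - mean, full_matrices=False)
    return mean, vt[:n_components]          # basis shape: (n_components, dim)

def shallow_reconstruct(fake_patch, mean, basis):
    """Project a fake patch onto the learned subspace and rebuild it.

    Components orthogonal to the subspace -- where upsampling artifacts
    tend to live -- are discarded by the projection.
    """
    coeffs = basis @ (fake_patch - mean)    # coordinates in the dictionary
    return mean + basis.T @ coeffs          # shallow reconstruction
```

A polished fake image would then be reassembled from its reconstructed patches before being shown to a detector; using fewer components removes more artifact energy at the cost of reconstruction fidelity.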
Related papers
- Self-Adaptive Reality-Guided Diffusion for Artifact-Free Super-Resolution [47.29558685384506]
Artifact-free super-resolution (SR) aims to translate low-resolution images into their high-resolution counterparts with a strict integrity of the original content.
Traditional diffusion-based SR techniques are prone to artifact introduction during iterative procedures.
We propose Self-Adaptive Reality-Guided Diffusion to identify and mitigate the propagation of artifacts.
arXiv Detail & Related papers (2024-03-25T11:29:19Z)
- A Single Simple Patch is All You Need for AI-generated Image Detection [19.541645669791023]
We find that generative models tend to focus on generating the patches with rich textures to make the images more realistic.
In this paper, we propose to exploit the noise pattern of a single simple patch to identify fake images.
Our method can achieve state-of-the-art performance on public benchmarks.
arXiv Detail & Related papers (2024-02-02T03:50:45Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Towards A Robust Deepfake Detector: Common Artifact Deepfake Detection Model [14.308886041268973]
We propose a novel deepfake detection method named Common Artifact Deepfake Detection Model.
We find that the main obstacle to learning common artifact features is that models are easily misled by the identity representation feature.
Our method effectively reduces the influence of Implicit Identity Leakage and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-26T04:02:29Z)
- Misleading Deep-Fake Detection with GAN Fingerprints [14.459389888856412]
We show that an adversary can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image.
Our results show that an adversary can often remove GAN fingerprints and thus evade the detection of generated images.
arXiv Detail & Related papers (2022-05-25T07:32:12Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real to human eyes.
We propose a novel fake detection approach that re-synthesizes testing images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- Identifying Invariant Texture Violation for Robust Deepfake Detection [17.306386179823576]
We propose the Invariant Texture Learning framework, which only accesses the published dataset with low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
arXiv Detail & Related papers (2020-12-19T03:02:15Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
- Fighting Deepfake by Exposing the Convolutional Traces on Images [0.0]
Mobile apps like FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to produce extreme transformations on human face photos.
This kind of media object took the name of Deepfake and raised a new challenge in the multimedia forensics field: the Deepfake detection challenge.
In this paper, a new approach aimed to extract a Deepfake fingerprint from images is proposed.
arXiv Detail & Related papers (2020-08-07T08:49:23Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality thanks to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.