Making GAN-Generated Images Difficult To Spot: A New Attack Against
Synthetic Image Detectors
- URL: http://arxiv.org/abs/2104.12069v1
- Date: Sun, 25 Apr 2021 05:56:57 GMT
- Title: Making GAN-Generated Images Difficult To Spot: A New Attack Against
Synthetic Image Detectors
- Authors: Xinwei Zhao, Matthew C. Stamm
- Abstract summary: We propose a new anti-forensic attack capable of fooling GAN-generated image detectors.
Our attack uses an adversarially trained generator to synthesize traces that these detectors associate with real images.
We show that our attack can fool eight state-of-the-art detection CNNs with synthetic images created using seven different GANs.
- Score: 24.809185168969066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visually realistic GAN-generated images have recently emerged as an important
misinformation threat. Research has shown that these synthetic images contain
forensic traces that are readily identifiable by forensic detectors.
Unfortunately, these detectors are built upon neural networks, which are
vulnerable to recently developed adversarial attacks. In this paper, we propose
a new anti-forensic attack capable of fooling GAN-generated image detectors.
Our attack uses an adversarially trained generator to synthesize traces that
these detectors associate with real images. Furthermore, we propose a technique
to train our attack so that it can achieve transferability, i.e. it can fool
unknown CNNs that it was not explicitly trained against. We demonstrate the
performance of our attack through an extensive set of experiments, where we
show that our attack can fool eight state-of-the-art detection CNNs with
synthetic images created using seven different GANs.
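The abstract describes the attack only at a high level: a generator is adversarially trained so that a detector is pushed toward its "real" decision while the image itself stays visually unchanged. The PyTorch sketch below illustrates that general training loop; the TraceGenerator architecture, the single frozen surrogate detector, the class index used for "real", and the loss weights are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a generator-based anti-forensic attack:
# a small residual generator G modifies a GAN-generated image so that a
# frozen surrogate detector D classifies it as "real", while an L1 term
# keeps the attacked image close to the original.
# All names, the detector interface, and the loss weights are assumptions.
import torch
import torch.nn as nn

class TraceGenerator(nn.Module):
    """Hypothetical residual generator that synthesizes 'real' forensic traces."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Predict a small residual and add it to the input image.
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)

def attack_step(generator, surrogate_detector, fake_images, optimizer,
                real_label=0, alpha=1.0, beta=10.0):
    """One training step: fool the frozen surrogate detector, preserve content."""
    generator.train()
    surrogate_detector.eval()  # detector weights are never updated
    optimizer.zero_grad()

    attacked = generator(fake_images)
    logits = surrogate_detector(attacked)  # assumed to return class logits (N, 2)

    # Fooling loss: push the detector's prediction toward the "real" class.
    target = torch.full((fake_images.size(0),), real_label,
                        dtype=torch.long, device=fake_images.device)
    fool_loss = nn.functional.cross_entropy(logits, target)

    # Fidelity loss: keep the attacked image visually close to the original.
    fidelity_loss = nn.functional.l1_loss(attacked, fake_images)

    loss = alpha * fool_loss + beta * fidelity_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (assuming `detector` is any frozen CNN that returns class logits):
#   G = TraceGenerator()
#   opt = torch.optim.Adam(G.parameters(), lr=1e-4)
#   loss = attack_step(G, detector, batch_of_gan_images, opt)
```

For the transferability claimed in the abstract, one natural extension (again an assumption, not the paper's stated procedure) is to average the fooling loss over an ensemble of surrogate detectors rather than a single one.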
Related papers
- Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors [14.284639462471274]
We evaluate state-of-the-art AI-generated image (AIGI) detectors under different attack scenarios.
Attacks can significantly reduce detection accuracy to the extent that the risks of relying on detectors outweigh their benefits.
We propose a simple defense mechanism to make CLIP-based detectors, which are currently the best-performing detectors, robust against these attacks.
arXiv Detail & Related papers (2024-10-02T14:11:29Z)
- Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks [17.87119255294563]
We investigate the vulnerability of state-of-the-art AIGI detectors against adversarial attack under white-box and black-box settings.
We propose a new attack with two main components. First, motivated by the clear differences between real and fake images in the frequency domain, we add perturbations in the frequency domain to push the image away from its original frequency distribution (a toy sketch of this idea appears after the related papers list below).
We show that adversarial attacks are a real threat to AIGI detectors: FPBA delivers successful black-box attacks across models, generators, and defense methods, and even evades cross-generator detection.
arXiv Detail & Related papers (2024-07-30T14:07:17Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces [17.408491376238008]
Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks.
In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms.
arXiv Detail & Related papers (2022-11-22T15:07:16Z)
- Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation [0.4297070083645049]
We propose a new black-box attack method against GAN-generated image detectors.
A novel contrastive learning strategy is adopted to train the encoder-decoder-based anti-forensic model.
The proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs.
arXiv Detail & Related papers (2022-11-07T12:56:14Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network [24.032025811564814]
Convolutional neural networks (CNNs) have become widely used in multimedia forensics.
Anti-forensic attacks have been developed to fool these CNN-based forensic algorithms.
We propose a new anti-forensic attack framework designed to remove forensic traces left by a variety of manipulation operations.
arXiv Detail & Related papers (2021-01-23T19:31:59Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noise to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
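As flagged above, the frequency-domain step mentioned in the "Vulnerabilities in AI-generated Image Detection" entry can be illustrated with a toy example: many AIGI detectors key on spectral artifacts of generated images, so perturbing the spectrum moves an image away from its original frequency distribution. The sketch below is an assumed illustration of that general idea, not the paper's FPBA algorithm; the perturbation model and epsilon value are hypothetical.

```python
# Toy frequency-domain perturbation: scale the magnitude spectrum by a
# small random factor and transform back to the pixel domain.
# This only illustrates the general idea; it is not FPBA.
import torch

def perturb_spectrum(image: torch.Tensor, epsilon: float = 0.02) -> torch.Tensor:
    """image: float tensor of shape (C, H, W) with values in [0, 1]."""
    spectrum = torch.fft.fft2(image)        # per-channel 2D FFT
    magnitude = spectrum.abs()
    phase = torch.angle(spectrum)

    # Multiply the magnitude by a random factor close to 1.0.
    noise = 1.0 + epsilon * torch.randn_like(magnitude)
    perturbed = (magnitude * noise) * torch.exp(1j * phase)

    # Back to the pixel domain; the imaginary residue is negligible.
    out = torch.fft.ifft2(perturbed).real
    return out.clamp(0.0, 1.0)

# Example: perturb a random image-shaped tensor and inspect the change.
x = torch.rand(3, 256, 256)
x_adv = perturb_spectrum(x)
print("max pixel change:", (x_adv - x).abs().max().item())
```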
This list is automatically generated from the titles and abstracts of the papers in this site.