Imperceptible Adversarial Examples for Fake Image Detection
- URL: http://arxiv.org/abs/2106.01615v1
- Date: Thu, 3 Jun 2021 06:25:04 GMT
- Title: Imperceptible Adversarial Examples for Fake Image Detection
- Authors: Quanyu Liao, Yuezun Li, Xin Wang, Bin Kong, Bin Zhu, Siwei Lyu,
Youbing Yin, Qi Song, Xi Wu
- Abstract summary: We propose a novel method to disrupt fake image detection by identifying the pixels that are key to a fake image detector and attacking only those key pixels.
Experiments on two public datasets with three fake image detectors indicate that our proposed method achieves state-of-the-art performance in both white-box and black-box attacks.
- Score: 46.72602615209758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fooling people with highly realistic fake images generated with Deepfake or
GANs causes great disturbance to our society. Many methods have been
proposed to detect fake images, but they are vulnerable to adversarial
perturbations -- intentionally designed noise that can lead to wrong
predictions. Existing methods of attacking fake image detectors usually generate
adversarial perturbations that cover almost the entire image. This is redundant
and increases the perceptibility of the perturbations. In this paper, we propose a
novel method to disrupt fake image detection by identifying the pixels that are
key to a fake image detector and attacking only those key pixels, which keeps the
$L_0$ and $L_2$ norms of the adversarial perturbations much smaller than those of
existing works. Experiments on two public datasets with three fake image
detectors indicate that our proposed method achieves state-of-the-art
performance in both white-box and black-box attacks.
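For intuition, below is a minimal, hypothetical sketch of the key-pixel idea described in the abstract, assuming a differentiable PyTorch detector whose logits distinguish real from fake. Pixel importance is approximated here by gradient magnitude and only the top-k locations are perturbed, which is what keeps the $L_0$ and $L_2$ norms small; the function name, the top-k criterion, and all hyperparameters are illustrative assumptions, not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def key_pixel_attack(detector, image, target_label, k=500, step=2 / 255, iters=10):
    """Perturb only the k most influential pixels of a fake image (sketch).

    detector:     differentiable model mapping an image batch to class logits.
    image:        tensor of shape (1, C, H, W), values in [0, 1].
    target_label: LongTensor of shape (1,), the class we want the detector to output.
    """
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(detector(adv), target_label)
        grad, = torch.autograd.grad(loss, adv)
        # Rank spatial locations by gradient magnitude (summed over channels)
        # and build a binary mask that keeps only the k "key" pixels.
        saliency = grad.abs().sum(dim=1, keepdim=True)        # (1, 1, H, W)
        mask = torch.zeros_like(saliency).flatten()
        mask[saliency.flatten().topk(k).indices] = 1.0
        mask = mask.view_as(saliency)                          # broadcasts over channels
        # Signed gradient step restricted to the key pixels: the L0 norm of the
        # total perturbation is bounded by k, which also keeps the L2 norm small.
        adv = (adv - step * grad.sign() * mask).clamp(0, 1).detach()
    return adv
```

In this sketch, `target_label` would be the index of the detector's "real" class, so a fake image is pushed across the decision boundary while most pixels remain untouched.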
Related papers
- Adversarial Magnification to Deceive Deepfake Detection through Super Resolution [9.372782789857803]
This paper explores the application of super resolution techniques as a possible adversarial attack in deepfake detection.
We demonstrate that minimal changes made by these methods in the visual appearance of images can have a profound impact on the performance of deepfake detection systems.
We propose a novel attack using super resolution as a quick, black-box and effective method to camouflage fake images and/or generate false alarms on pristine images.
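A rough sketch of the super-resolution camouflage idea above, assuming only that some callable super-resolution model is available; bicubic interpolation is used below purely as a placeholder, and the function name and interface are assumptions rather than the paper's implementation.

```python
import torch.nn.functional as F

def sr_camouflage(image, sr_model=None, scale=2):
    """Pass an image through a super-resolution stage and back (sketch).

    image:    tensor of shape (1, C, H, W) with values in [0, 1].
    sr_model: any callable mapping a low-res tensor to a high-res tensor;
              plain bicubic upsampling is used below only as a stand-in.
    """
    h, w = image.shape[-2:]
    upscaled = sr_model(image) if sr_model is not None else F.interpolate(
        image, scale_factor=scale, mode="bicubic", align_corners=False)
    # Resampling back to the original size subtly rewrites the high-frequency
    # statistics that many deepfake detectors key on, without visible changes.
    return F.interpolate(upscaled, size=(h, w), mode="bicubic",
                         align_corners=False).clamp(0, 1)
```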
arXiv Detail & Related papers (2024-07-02T21:17:36Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation [0.4297070083645049]
We propose a new black-box attack method against GAN-generated image detectors.
A novel contrastive learning strategy is adopted to train the encoder-decoder-based anti-forensic model.
The proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs.
arXiv Detail & Related papers (2022-11-07T12:56:14Z)
- Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
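A hedged sketch of what an attack in the frequency domain can look like, assuming a differentiable PyTorch detector: the perturbation variable lives on the 2D Fourier coefficients and gradients flow back through the inverse transform. The cited paper's exact transform (e.g. a DCT), its constraints, and the meta-learning hybrid are not reproduced here.

```python
import torch
import torch.nn.functional as F

def frequency_attack(detector, image, target_label, step=0.01, iters=10):
    """Craft the perturbation over 2D Fourier coefficients (sketch)."""
    spectrum = torch.fft.fft2(image)   # complex coefficients of the clean image
    delta = torch.zeros_like(image)    # real-valued offset in the frequency domain
    for _ in range(iters):
        delta.requires_grad_(True)
        # The spatial image is only modified indirectly, via the inverse FFT.
        adv = torch.fft.ifft2(spectrum + delta).real.clamp(0, 1)
        loss = F.cross_entropy(detector(adv), target_label)
        grad, = torch.autograd.grad(loss, delta)
        # Descend on the loss toward the target ("real") label.
        delta = (delta - step * grad.sign()).detach()
    return torch.fft.ifft2(spectrum + delta).real.clamp(0, 1).detach()
```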
arXiv Detail & Related papers (2022-03-29T15:34:13Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there is a correspondence between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method that goes beyond image space using a two-stream architecture, in which the image stream focuses on pixel artifacts and the gradient stream copes with confidence artifacts.
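A schematic sketch of a two-stream detector in the spirit of the description above, with one stream over raw pixels and one over a gradient/confidence map derived from a victim classifier; the layer sizes and fusion scheme are illustrative assumptions, not the architecture of the cited paper.

```python
import torch
import torch.nn as nn

class TwoStreamDetector(nn.Module):
    """Two-stream adversarial-example detector (schematic sketch)."""

    def __init__(self, channels=3):
        super().__init__()

        def stream(in_ch):
            # Small conv stack ending in a globally pooled 64-d embedding.
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.image_stream = stream(channels)     # looks for pixel artifacts
        self.gradient_stream = stream(channels)  # looks for confidence artifacts
        self.head = nn.Linear(64 * 2, 2)         # benign vs. adversarial

    def forward(self, image, grad_map):
        fused = torch.cat([self.image_stream(image),
                           self.gradient_stream(grad_map)], dim=1)
        return self.head(fused)
```

Here `grad_map` would be something like the input gradient of the victim classifier's confidence, computed beforehand and fed to the detector alongside the image.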
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Exploring Adversarial Fake Images on Face Manifold [5.26916168336451]
Images synthesized by powerful generative adversarial network (GAN) based methods have raised moral and privacy concerns.
In this paper, instead of adding adversarial noise, we search for adversarial points on the face manifold to generate anti-forensic fake face images.
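A minimal sketch of searching for adversarial points on the face manifold, assuming access to a GAN generator and a differentiable detector; the latent code is optimized so that the generated face is classified as real, with a regularizer keeping it close to its starting point. All names and the regularization choice are assumptions, not the paper's procedure.

```python
import torch
import torch.nn.functional as F

def manifold_adversarial_search(generator, detector, z_init, real_label,
                                iters=100, lr=0.02, reg=1e-3):
    """Search the GAN latent space for an anti-forensic face image (sketch)."""
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(iters):
        image = generator(z)                      # stays on the face manifold
        # Push the detector toward the "real" label while keeping z near z_init,
        # so identity and visual quality are roughly preserved.
        loss = F.cross_entropy(detector(image), real_label) \
               + reg * (z - z_init).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```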
arXiv Detail & Related papers (2021-01-09T02:08:59Z)
- Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images that are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suited for image anti-forensics, that operates in a transformed color domain by considering visual perception.
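A small sketch of a perceptually motivated attack in a transformed color domain, here an RGB-to-YCbCr transform with the perturbation confined to the chroma channels; the transform, mask, and step rule are stand-ins for illustration, not the method proposed in the cited paper.

```python
import torch
import torch.nn.functional as F

# Full-range BT.601 RGB <-> YCbCr matrices (offsets omitted for simplicity).
_RGB2YCC = torch.tensor([[ 0.2990,  0.5870,  0.1140],
                         [-0.1687, -0.3313,  0.5000],
                         [ 0.5000, -0.4187, -0.0813]])

def rgb_to_ycbcr(x):   # x: (B, 3, H, W) in [0, 1]
    return torch.einsum("ij,bjhw->bihw", _RGB2YCC, x)

def ycbcr_to_rgb(y):
    return torch.einsum("ij,bjhw->bihw", torch.inverse(_RGB2YCC), y)

def chroma_attack(detector, image, target_label, step=0.01, iters=10):
    """Perturb only the chroma (Cb/Cr) channels of a fake image (sketch)."""
    ycc = rgb_to_ycbcr(image)
    chroma_mask = torch.tensor([0.0, 1.0, 1.0]).view(1, 3, 1, 1)  # freeze luma
    for _ in range(iters):
        ycc = ycc.detach().requires_grad_(True)
        adv = ycbcr_to_rgb(ycc).clamp(0, 1)
        loss = F.cross_entropy(detector(adv), target_label)
        grad, = torch.autograd.grad(loss, ycc)
        # The eye is less sensitive to chroma than to luma, so restricting the
        # step to Cb/Cr trades a little attack strength for imperceptibility.
        ycc = ycc - step * grad.sign() * chroma_mask
    return ycbcr_to_rgb(ycc).clamp(0, 1).detach()
```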
arXiv Detail & Related papers (2020-10-29T18:54:06Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.