Exploring Frequency Adversarial Attacks for Face Forgery Detection
- URL: http://arxiv.org/abs/2203.15674v1
- Date: Tue, 29 Mar 2022 15:34:13 GMT
- Title: Exploring Frequency Adversarial Attacks for Face Forgery Detection
- Authors: Shuai Jia, Chao Ma, Taiping Yao, Bangjie Yin, Shouhong Ding, Xiaokang Yang
- Abstract summary: We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
- Score: 59.10415109589605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various facial manipulation techniques have drawn serious public concerns in
morality, security, and privacy. Although existing face forgery classifiers
achieve promising performance on detecting fake images, these methods are
vulnerable to adversarial examples with injected imperceptible perturbations on
the pixels. Meanwhile, many face forgery detectors rely on the frequency
discrepancy between real and fake faces as a crucial clue. In this paper, instead
of injecting adversarial perturbations into the spatial domain, we propose a
frequency adversarial attack method against face forgery detectors. Concretely,
we apply the discrete cosine transform (DCT) to the input images and introduce a
fusion module to capture the salient regions of the adversarial perturbation in
the frequency domain. Compared with existing adversarial attacks (e.g., FGSM, PGD)
in the spatial domain, our method is more imperceptible to human observers and does
not degrade the visual quality of the original images. Moreover, inspired by
the idea of meta-learning, we also propose a hybrid adversarial attack that
performs attacks in both the spatial and frequency domains. Extensive
experiments indicate that the proposed method effectively fools not only
spatial-based detectors but also state-of-the-art frequency-based detectors.
In addition, the proposed frequency attack enhances transferability across
face forgery detectors under the black-box setting.
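As a rough illustration of the core idea in the abstract, the sketch below perturbs an image in the DCT domain rather than the pixel domain. It is an assumption-laden toy example, not the authors' implementation: the `detector`, the label, and the step size `eps` are placeholders, a single FGSM-style step stands in for the paper's full optimization, and the fusion module that locates salient frequency regions is omitted.

```python
# Minimal sketch of a frequency-domain (DCT) adversarial step in PyTorch.
# NOT the authors' released code: `detector` is any differentiable forgery
# classifier returning logits over {real, fake}; images are assumed in [0, 1].
import math
import torch
import torch.nn.functional as F


def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II transform matrix of size n x n."""
    pos = torch.arange(n, dtype=torch.float32)
    basis = torch.cos(math.pi * (2 * pos[None, :] + 1) * pos[:, None] / (2 * n))
    basis[0] /= math.sqrt(2.0)
    return basis * math.sqrt(2.0 / n)


def dct_2d(x: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # x: (B, C, H, W) with H == W == d.shape[0]; transforms rows and columns.
    return d @ x @ d.T


def idct_2d(coeffs: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # The DCT matrix is orthonormal, so its transpose inverts the transform.
    return d.T @ coeffs @ d


def frequency_attack(detector, image, label, eps=1e-2):
    """One FGSM-like step applied to DCT coefficients instead of pixels."""
    d = dct_matrix(image.shape[-1]).to(image.device)
    coeffs = dct_2d(image, d).detach().requires_grad_(True)

    # Score the image reconstructed from the (to-be-perturbed) coefficients.
    logits = detector(idct_2d(coeffs, d))
    loss = F.cross_entropy(logits, label)
    loss.backward()

    # Move coefficients in the direction that increases the detector's loss,
    # then transform back to the pixel domain.
    adv_coeffs = coeffs + eps * coeffs.grad.sign()
    return idct_2d(adv_coeffs, d).clamp(0.0, 1.0).detach()
```

The hybrid attack mentioned in the abstract could, under the same assumptions, be sketched by alternating such frequency-domain steps with ordinary pixel-domain FGSM/PGD steps; the paper's meta-learning-inspired formulation is not reproduced here.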
Related papers
- Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks [17.87119255294563]
We investigate the vulnerability of state-of-the-art AIGI detectors against adversarial attack under white-box and black-box settings.
We propose a new attack with two main parts. First, inspired by the clear difference between real and fake images in the frequency domain, we add perturbations in the frequency domain to push the image away from its original frequency distribution.
We show that adversarial attack is truly a real threat to AIGI detectors, because FPBA can deliver successful black-box attacks across models, generators, defense methods, and even evade cross-generator detection.
arXiv Detail & Related papers (2024-07-30T14:07:17Z) - Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Attention Consistency Refined Masked Frequency Forgery Representation
for Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods suffer from unsatisfactory generalization when determining authenticity in unseen domains.
We propose a novel Attention Consistency refined Masked Frequency forgery representation model (ACMF) for generalizable face forgery detection.
Experiment results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z) - Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - Misleading Deep-Fake Detection with GAN Fingerprints [14.459389888856412]
We show that an adversary can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image.
Our results show that an adversary can often remove GAN fingerprints and thus evade the detection of generated images.
arXiv Detail & Related papers (2022-05-25T07:32:12Z) - Perception Matters: Exploring Imperceptible and Transferable
Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images that are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics against fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suitable for image anti-forensics, in the transformed color domain by considering visual perception.
arXiv Detail & Related papers (2020-10-29T18:54:06Z) - Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)