Exploring Frequency Adversarial Attacks for Face Forgery Detection
- URL: http://arxiv.org/abs/2203.15674v1
- Date: Tue, 29 Mar 2022 15:34:13 GMT
- Title: Exploring Frequency Adversarial Attacks for Face Forgery Detection
- Authors: Shuai Jia, Chao Ma, Taiping Yao, Bangjie Yin, Shouhong Ding, Xiaokang Yang
- Abstract summary: We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
- Score: 59.10415109589605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various facial manipulation techniques have raised serious public concerns
about morality, security, and privacy. Although existing face forgery classifiers
achieve promising performance on detecting fake images, these methods are
vulnerable to adversarial examples with injected imperceptible perturbations on
the pixels. Meanwhile, many face forgery detectors rely on the frequency
discrepancy between real and fake faces as a crucial clue. In this paper, instead
of injecting adversarial perturbations into the spatial domain, we propose a
frequency adversarial attack method against face forgery detectors. Concretely,
we apply discrete cosine transform (DCT) on the input images and introduce a
fusion module to capture the salient regions for the adversary in the frequency
domain. Compared with existing adversarial attacks (e.g. FGSM, PGD) in the
spatial domain, our method is more imperceptible to human observers and does
not degrade the visual quality of the original images. Moreover, inspired by
the idea of meta-learning, we also propose a hybrid adversarial attack that
performs attacks in both the spatial and frequency domains. Extensive
experiments indicate that the proposed method fools not only the spatial-based
detectors but also the state-of-the-art frequency-based detectors effectively.
In addition, the proposed frequency attack enhances the transferability across
face forgery detectors as black-box attacks.
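The core frequency-domain idea can be sketched as follows. This is a hypothetical illustration only, assuming SciPy for the DCT: it injects a fixed random perturbation into the DCT coefficients and transforms back to pixels, whereas the paper's actual attack optimizes the perturbation adversarially and uses a fusion module to locate salient frequency regions.

```python
# Hypothetical sketch of a frequency-domain perturbation, inspired by the
# paper's DCT-based attack. The perturbation here is random, not adversarial,
# and the fusion module is omitted; this only shows the transform pipeline.
import numpy as np
from scipy.fft import dctn, idctn


def dct_perturb(image: np.ndarray, epsilon: float = 0.5, seed: int = 0) -> np.ndarray:
    """Perturb an image in the DCT domain and transform back to pixels."""
    rng = np.random.default_rng(seed)
    coeffs = dctn(image, norm="ortho")            # 2-D DCT of the image
    noise = rng.uniform(-epsilon, epsilon, coeffs.shape)
    adv_coeffs = coeffs + noise                   # inject noise in frequency space
    return idctn(adv_coeffs, norm="ortho")        # back to the spatial domain
```

Because the orthonormal DCT is linear, the perturbation remains bounded by `epsilon` in the frequency domain while spreading smoothly over the pixels, which is the intuition behind the improved imperceptibility claimed above.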
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods suffer from unsatisfactory generalization ability to determine the authenticity in the unseen domain.
We propose a novel Attention Consistency refined Masked frequency Forgery representation model (ACMF) toward generalizing face forgery detection.
Experiment results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z)
- Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
However, new attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z)
- Misleading Deep-Fake Detection with GAN Fingerprints [14.459389888856412]
We show that an adversary can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image.
Our results show that an adversary can often remove GAN fingerprints and thus evade the detection of generated images.
arXiv Detail & Related papers (2022-05-25T07:32:12Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction, and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images that are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suitable for image anti-forensics, in the transformed color domain by considering visual perception.
arXiv Detail & Related papers (2020-10-29T18:54:06Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
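Several of the entries above, like the main paper's frequency attack and the GAN-fingerprint work, operate directly on an image's spectrum. As a toy, hypothetical illustration of spectral filtering, not any paper's actual method, the sketch below zeroes all DCT coefficients outside a low-frequency block, the kind of operation that would suppress high-frequency artifacts such as GAN fingerprints:

```python
# Toy low-pass filter in the DCT domain. Zeroing high-frequency coefficients
# is an assumption for illustration; real fingerprint-removal attacks are far
# more targeted to preserve image quality.
import numpy as np
from scipy.fft import dctn, idctn


def lowpass_dct(image: np.ndarray, cutoff: int) -> np.ndarray:
    """Keep only the top-left (low-frequency) cutoff x cutoff DCT block."""
    coeffs = dctn(image, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:cutoff, :cutoff] = 1.0                  # retain low frequencies only
    return idctn(coeffs * mask, norm="ortho")     # reconstruct filtered image
```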
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.