Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack
- URL: http://arxiv.org/abs/2203.11433v1
- Date: Tue, 22 Mar 2022 03:13:33 GMT
- Title: Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack
- Authors: Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou
- Abstract summary: We present a detector-agnostic trace removal attack for DeepFake anti-forensics.
Instead of investigating the detector side, our attack looks into the original DeepFake creation pipeline.
Experiments show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors.
- Score: 16.221725939480084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: DeepFakes are raising significant social concerns. Although various DeepFake
detectors have been developed as forensic countermeasures, these detectors are
still vulnerable to attacks. Recently, a few attacks, principally adversarial
attacks, have succeeded in cloaking DeepFake images to evade detection.
However, these attacks have typical detector-specific designs, which require
prior knowledge about the detector, leading to poor transferability. Moreover,
these attacks only consider simple security scenarios. Less is known about how
effective they are in high-level scenarios where either the detectors or the
attacker's knowledge varies. In this paper, we address these challenges by
presenting a novel detector-agnostic trace removal attack for DeepFake
anti-forensics. Instead of investigating the detector side, our attack looks
into the original DeepFake creation pipeline, attempting to remove all
detectable natural DeepFake traces to render the fake images more "authentic".
To implement this attack, first, we perform a DeepFake trace discovery,
identifying three discernible traces. Then a trace removal network (TR-Net) is
proposed based on an adversarial learning framework involving one generator and
multiple discriminators. Each discriminator is responsible for one individual
trace representation to avoid cross-trace interference. These discriminators
are arranged in parallel, which prompts the generator to remove various traces
simultaneously. To evaluate the attack efficacy, we crafted heterogeneous
security scenarios where the detectors were embedded with different levels of
defense and the attackers' background knowledge of the data varied. The
experimental results show that the proposed attack can significantly compromise
the detection accuracy of six state-of-the-art DeepFake detectors while causing
only a negligible loss in visual quality to the original DeepFake samples.
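The abstract describes TR-Net as a single generator trained against several parallel discriminators, each bound to one trace representation so that the traces do not interfere with one another. Below is a minimal, hedged sketch of how such a one-generator, multi-discriminator loop could be wired in PyTorch; the three trace views (raw pixels, log-FFT amplitude, a simple high-pass residual), the network shapes, and the loss weights are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the authors' code): one generator trained against several
# parallel discriminators, each judging a different "trace" view of the image.
import torch
import torch.nn as nn
import torch.nn.functional as F

def spatial_view(x):
    # Spatial trace: the image itself (assumption: raw RGB serves as the spatial representation).
    return x

def spectral_view(x):
    # Spectral trace: log-amplitude of the 2D FFT, a common way to expose frequency artifacts.
    amp = torch.fft.fft2(x, norm="ortho").abs()
    return torch.log(amp + 1e-8)

def noise_view(x):
    # Noise-residual trace: image minus a blurred copy (a simple high-pass residual).
    blurred = F.avg_pool2d(x, 3, stride=1, padding=1)
    return x - blurred

TRACE_VIEWS = {"spatial": spatial_view, "spectral": spectral_view, "noise": noise_view}

class Generator(nn.Module):
    """Predicts a small residual added to the fake image to remove detectable traces."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)

def make_discriminator(ch=3):
    # PatchGAN-style critic scoring one trace representation.
    return nn.Sequential(
        nn.Conv2d(ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(128, 1, 4, stride=1, padding=1),
    )

G = Generator()
Ds = nn.ModuleDict({name: make_discriminator() for name in TRACE_VIEWS})
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(Ds.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(fake_imgs, real_imgs):
    # 1) Update each discriminator on its own trace view only (avoids cross-trace interference).
    cleaned = G(fake_imgs).detach()
    opt_d.zero_grad()
    d_loss = 0.0
    for name, view in TRACE_VIEWS.items():
        real_logits = Ds[name](view(real_imgs))
        fake_logits = Ds[name](view(cleaned))
        d_loss = d_loss + bce(real_logits, torch.ones_like(real_logits)) \
                        + bce(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator to fool all discriminators at once, plus a fidelity term.
    cleaned = G(fake_imgs)
    opt_g.zero_grad()
    g_loss = 10.0 * F.l1_loss(cleaned, fake_imgs)  # keep visual quality close to the original fake
    for name, view in TRACE_VIEWS.items():
        logits = Ds[name](view(cleaned))
        g_loss = g_loss + bce(logits, torch.ones_like(logits))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage with random tensors standing in for face crops in [0, 1].
fake_batch = torch.rand(4, 3, 128, 128)
real_batch = torch.rand(4, 3, 128, 128)
print(train_step(fake_batch, real_batch))
```

Keeping one discriminator per trace view mirrors the abstract's point about avoiding cross-trace interference: each critic only ever sees its own representation, while the generator receives the sum of all adversarial losses plus a fidelity term that preserves visual quality.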
Related papers
- Real is not True: Backdoor Attacks Against Deepfake Detection [9.572726483706846]
We introduce Bad-Deepfake, a new class of backdoor attack against deepfake detectors.
Our approach hinges on strategically manipulating a subset of the training data, giving the attacker disproportionate influence over the behavior of the trained model.
arXiv Detail & Related papers (2024-03-11T10:57:14Z)
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat to face forgery detection posed by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
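Both backdoor entries above describe the same basic mechanism: poison a small slice of the training set so that the trained detector responds to a hidden trigger. Below is a minimal, hedged sketch of the clean-label variant, written against a generic NumPy image array; the trigger pattern, poisoning rate, blending weight, and the real=0 / fake=1 label convention are illustrative assumptions, not details from either paper.

```python
# Hedged sketch of clean-label data poisoning against a face forgery detector (illustrative only).
# A small trigger is blended into a fraction of real-labelled training faces; labels stay untouched,
# so the set looks clean, yet the detector learns to associate the trigger with the "real" class.
import numpy as np

RNG = np.random.default_rng(0)

def make_trigger(size=8):
    # Fixed checkerboard patch used as the backdoor trigger (an assumed design choice).
    patch = np.indices((size, size)).sum(axis=0) % 2
    return patch.astype(np.float32)[..., None].repeat(3, axis=-1)

def poison_clean_label(images, labels, real_label=0, rate=0.05, alpha=0.2):
    """Blend the trigger into a random subset of real-labelled images; keep labels unchanged."""
    trigger = make_trigger()
    h, w, _ = trigger.shape
    real_idx = np.where(labels == real_label)[0]
    chosen = RNG.choice(real_idx, size=max(1, int(rate * len(real_idx))), replace=False)
    poisoned = images.copy()
    poisoned[chosen, -h:, -w:, :] = (1 - alpha) * poisoned[chosen, -h:, -w:, :] + alpha * trigger
    return poisoned, labels  # labels are *not* modified: this is what makes the attack clean-label

def apply_trigger_at_test_time(fake_image, alpha=0.2):
    """At inference, stamping the same trigger onto a forged face aims to flip the detector to 'real'."""
    trigger = make_trigger()
    h, w, _ = trigger.shape
    out = fake_image.copy()
    out[-h:, -w:, :] = (1 - alpha) * out[-h:, -w:, :] + alpha * trigger
    return out

# Tiny usage example with random data standing in for face crops in [0, 1].
images = RNG.random((100, 64, 64, 3), dtype=np.float32)
labels = RNG.integers(0, 2, size=100)          # 0 = real, 1 = fake (assumed convention)
poisoned_images, poisoned_labels = poison_clean_label(images, labels)
triggered_fake = apply_trigger_at_test_time(images[0])
```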
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies can produce vivid fake faces, raising public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- How Generalizable are Deepfake Image Detectors? An Empirical Study [4.42204674141385]
We present the first empirical study on the generalizability of deepfake detectors.
Our study utilizes six deepfake datasets, five deepfake image detection methods, and two model augmentation approaches.
We find that detectors are learning unwanted properties specific to synthesis methods and struggling to extract discriminative features.
arXiv Detail & Related papers (2023-08-08T10:30:34Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Understanding the Security of Deepfake Detection [23.118012417901078]
We study the security of state-of-the-art deepfake detection methods in adversarial settings.
We use two large-scale public DeepFake data sources, FaceForensics++ and the Facebook Deepfake Detection Challenge.
Our results uncover multiple security limitations of the deepfake detection methods in adversarial settings.
arXiv Detail & Related papers (2021-07-05T14:18:21Z)
- We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature [3.5272597442284104]
In this paper, we explore the problem of detecting adversarial patch attacks against object detection.
A fast signature-based defense method is proposed and demonstrated to be effective.
The newly generated adversarial patches can successfully evade the proposed signature-based defense.
We present a novel signature-independent detection method based on the internal content semantics consistency.
arXiv Detail & Related papers (2021-06-09T17:58:08Z)
- Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
arXiv Detail & Related papers (2020-12-07T18:59:08Z)
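The Identity-Driven DeepFake Detection entry above frames detection as an identity comparison between the suspect image/video and reference material for the target identity. The sketch below illustrates that decision rule with a placeholder embedding network; `TinyFaceEmbedder`, the 0.7 cosine threshold, and the input sizes are hypothetical stand-ins rather than the OuterFace baseline.

```python
# Hedged sketch of an identity-driven decision rule (illustrative; not the OuterFace algorithm).
# The detector receives the suspect face plus reference images of the claimed identity and
# decides via embedding similarity. Any face-recognition backbone could replace the stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFaceEmbedder(nn.Module):
    """Placeholder face-embedding network (assumption: 128-D L2-normalised embeddings)."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)

@torch.no_grad()
def identity_driven_decision(embedder, suspect, references, threshold=0.7):
    """Return True ('same identity, likely genuine') if the suspect embedding is close
    to the mean reference embedding; False flags a possible identity swap."""
    suspect_emb = embedder(suspect.unsqueeze(0))                  # (1, D)
    ref_emb = embedder(references).mean(dim=0, keepdim=True)      # (1, D)
    similarity = F.cosine_similarity(suspect_emb, F.normalize(ref_emb, dim=1)).item()
    return similarity >= threshold

# Usage with random tensors standing in for aligned face crops.
embedder = TinyFaceEmbedder().eval()
suspect = torch.rand(3, 112, 112)          # suspect face crop
references = torch.rand(4, 3, 112, 112)    # known images of the target identity
print(identity_driven_decision(embedder, suspect, references))
```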
- Adversarial Threats to DeepFake Detection: A Practical Perspective [12.611342984880826]
We study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint.
We create more accessible attacks using Universal Adversarial Perturbations, which pose a highly feasible attack scenario.
arXiv Detail & Related papers (2020-11-19T16:53:38Z)
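The entry above relies on Universal Adversarial Perturbations, i.e. a single perturbation reused across many inputs. The following is a rough sketch of how such a perturbation could be optimised against a deepfake detector; the stand-in `detector`, the 224x224 input size, the epsilon budget, and the single-logit output convention are assumptions for illustration, not the attack from the paper.

```python
# Hedged sketch of a universal adversarial perturbation (UAP) against a deepfake detector.
# One perturbation delta is optimised over many fake images so that the detector outputs
# "real" for all of them. The detector below is a hypothetical stand-in.
import torch
import torch.nn as nn

def craft_uap(detector, fake_loader, epsilon=8 / 255, steps=100, lr=1e-2, device="cpu"):
    """Optimise one image-sized perturbation shared by every fake sample."""
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)  # assumed input size
    opt = torch.optim.Adam([delta], lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    detector.eval()
    for _ in range(steps):
        for fakes in fake_loader:                       # batches of fake faces in [0, 1]
            fakes = fakes.to(device)
            adv = torch.clamp(fakes + delta, 0.0, 1.0)
            logits = detector(adv).squeeze(1)           # assumed single "fake" logit per image
            # Push every prediction towards the "real" label (0).
            loss = loss_fn(logits, torch.zeros_like(logits))
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                       # keep the perturbation within the budget
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()

# Usage with a stand-in detector and random data in place of real fake-face batches.
detector = nn.Sequential(nn.Conv2d(3, 8, 3, stride=4), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
fake_loader = [torch.rand(4, 3, 224, 224) for _ in range(2)]
uap = craft_uap(detector, fake_loader, steps=2)
adv_image = torch.clamp(fake_loader[0] + uap, 0.0, 1.0)   # the same delta applies to any fake image
```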
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)