AVA: Inconspicuous Attribute Variation-based Adversarial Attack
bypassing DeepFake Detection
- URL: http://arxiv.org/abs/2312.08675v1
- Date: Thu, 14 Dec 2023 06:25:56 GMT
- Title: AVA: Inconspicuous Attribute Variation-based Adversarial Attack
bypassing DeepFake Detection
- Authors: Xiangtao Meng, Li Wang, Shanqing Guo, Lei Ju, Qingchuan Zhao
- Abstract summary: DeepFake applications have become popular in recent years, but their abuse poses a serious privacy threat.
Most detection algorithms designed to mitigate this abuse are inherently vulnerable to adversarial attacks.
We identify a new attribute-variation-based adversarial attack (AVA) that perturbs the latent space via a combination of a Gaussian prior and a semantic discriminator.
- Score: 9.40828913999459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While DeepFake applications have become popular in recent years, their
abuse poses a serious privacy threat. Unfortunately, most detection algorithms
designed to mitigate this abuse are inherently vulnerable to adversarial attacks
because they are built atop DNN-based classification models, and the literature
has demonstrated that they can be bypassed by introducing pixel-level
perturbations. Though corresponding mitigations have been proposed, we have
identified a new attribute-variation-based adversarial attack (AVA) that
perturbs the latent space via a combination of a Gaussian prior and a semantic
discriminator to bypass such mitigation. It perturbs the semantics in the
attribute space of DeepFake images in ways that are inconspicuous to human
beings (e.g., an open mouth) but can result in substantial differences in
DeepFake detection. We evaluate our proposed AVA attack on nine state-of-the-art
DeepFake detection algorithms and applications. The empirical results
demonstrate that the AVA attack defeats state-of-the-art black-box attacks
against DeepFake detectors and achieves more than a 95% success rate on two
commercial DeepFake detectors. Moreover, our human study indicates that
AVA-generated DeepFake images are often imperceptible to humans, which presents
huge security and privacy concerns.
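As a rough illustration of the idea, the following toy sketch shows a greedy random search in an attribute latent space. This is not the authors' implementation: `detector`, `sem_disc`, the latent code, and the attribute dimensions are all hypothetical stand-ins. It perturbs only the attribute dimensions with Gaussian noise, rejects edits that a semantic-plausibility score deems unrealistic, and keeps candidates that lower the detector's fake-confidence score:

```python
import random

def ava_attack_sketch(z, detector, sem_disc, attr_dims,
                      sigma=0.1, steps=200, tau=0.5, seed=0):
    """Toy attribute-variation attack: greedy random search in latent space.

    z         -- latent code (list of floats), hypothetical
    detector  -- callable z -> fake-confidence score (lower = less detectable)
    sem_disc  -- callable z -> semantic-plausibility score (higher = realistic)
    attr_dims -- indices of latent dimensions tied to semantic attributes
    """
    rng = random.Random(seed)
    best, best_score = list(z), detector(z)
    for _ in range(steps):
        cand = list(best)
        for d in attr_dims:                   # edit semantics only (e.g. "mouth open")
            cand[d] += rng.gauss(0.0, sigma)  # Gaussian prior on the edit size
        if sem_disc(cand) < tau:              # discard semantically implausible edits
            continue
        score = detector(cand)
        if score < best_score:                # detector is now less confident
            best, best_score = cand, score
    return best, best_score
```

In the paper, the detector and the semantic discriminator are neural networks and the latent code comes from a generative model's attribute space; here, simple callables stand in for them to keep the sketch self-contained.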
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Real is not True: Backdoor Attacks Against Deepfake Detection [9.572726483706846]
We introduce Bad-Deepfake, a novel backdoor attack against deepfake detectors.
Our approach hinges on strategically manipulating a subset of the training data, giving us disproportionate influence over the behavior of the trained model.
arXiv Detail & Related papers (2024-03-11T10:57:14Z)
- GazeForensics: DeepFake Detection via Gaze-guided Spatial Inconsistency Learning [63.547321642941974]
We introduce GazeForensics, an innovative DeepFake detection method that utilizes gaze representation obtained from a 3D gaze estimation model.
Experiment results reveal that our proposed GazeForensics outperforms the current state-of-the-art methods.
arXiv Detail & Related papers (2023-11-13T04:48:33Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models [0.5827521884806072]
We investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods.
Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images.
Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection.
arXiv Detail & Related papers (2023-07-11T15:57:51Z)
- Towards an Accurate and Secure Detector against Adversarial Perturbations [58.02078078305753]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural-artificial data.
We propose an accurate and secure adversarial example detector, relying on a spatial-frequency discriminative decomposition with secret keys.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
However, new attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model trained for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack [16.221725939480084]
We present a detector-agnostic trace removal attack for DeepFake anti-forensics.
Instead of investigating the detector side, our attack looks into the original DeepFake creation pipeline.
Experiments show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors.
arXiv Detail & Related papers (2022-03-22T03:13:33Z)
- Adversarial Threats to DeepFake Detection: A Practical Perspective [12.611342984880826]
We study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint.
We create more accessible attacks using Universal Adversarial Perturbations which pose a very feasible attack scenario.
arXiv Detail & Related papers (2020-11-19T16:53:38Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.