Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection
- URL: http://arxiv.org/abs/2309.01104v1
- Date: Sun, 3 Sep 2023 07:01:34 GMT
- Title: Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection
- Authors: Weijie Wang, Zhengyu Zhao, Nicu Sebe, Bruno Lepri
- Abstract summary: We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat outperforms conventional attacks in both cross-detector transferability and robustness to defenses.
- Score: 58.1263969438364
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Malicious use of deepfakes raises serious public concerns and erodes
people's trust in digital media. Although effective deepfake detectors have
been proposed, they are substantially vulnerable to adversarial attacks. To
evaluate the detector's robustness, recent studies have explored various
attacks. However, all existing attacks are limited to 2D image perturbations,
which are hard to translate into real-world facial changes. In this paper, we
propose adversarial head turn (AdvHeat), the first attempt at 3D adversarial
face views against deepfake detectors, based on face view synthesis from a
single-view fake image. Extensive experiments validate the vulnerability of
various detectors to AdvHeat in realistic, black-box scenarios. For example,
AdvHeat based on a simple random search yields a high attack success rate of
96.8% with 360 search steps. When additional query access is allowed, we can
further reduce the step budget to 50. Additional analyses demonstrate that
AdvHeat outperforms conventional attacks in both cross-detector
transferability and robustness to defenses. The adversarial images generated by
AdvHeat are also shown to have natural looks. Our code, including that for
generating a multi-view dataset consisting of 360 synthetic views for each of
1000 IDs from FaceForensics++, is available at
https://github.com/twowwj/AdvHeaT.
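A minimal sketch of the random-search attack described in the abstract, assuming hypothetical `synthesize_view` and `detector` callables; the paper's actual view-synthesis pipeline, detector interfaces, and acceptance rule are not reproduced here.

```python
# Minimal sketch of an AdvHeat-style random search over head-turn angles.
# `synthesize_view` and `detector` are hypothetical stand-ins for the paper's
# face view synthesis model and the attacked black-box deepfake detector.
import random

def advheat_random_search(fake_image, synthesize_view, detector,
                          max_steps=360, yaw_range=(-90.0, 90.0)):
    """Search head-turn angles until the detector labels the view 'real'.

    fake_image:      a single-view fake face image (any array type).
    synthesize_view: callable(image, yaw_degrees) -> novel-view image.
    detector:        callable(image) -> probability that the image is fake.
    """
    best_view, best_score = fake_image, detector(fake_image)
    for _ in range(max_steps):
        yaw = random.uniform(*yaw_range)         # sample a candidate head turn
        view = synthesize_view(fake_image, yaw)  # render the fake face at that angle
        score = detector(view)                   # black-box query: fake probability
        if score < best_score:                   # keep the most "real-looking" view
            best_view, best_score = view, score
        if best_score < 0.5:                     # detector now says "real": success
            return best_view, best_score
    return best_view, best_score                 # best effort within the step budget
```

The query-efficient variant mentioned in the abstract would replace the uniform sampling with a feedback-guided search, which is why the step budget can drop from 360 to 50.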
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research, we propose geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are simultaneously present.
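As a loose illustration only (the cited paper defines GFF precisely; this sketch does not), a per-frame face-detector confidence can be treated as a degree-of-presence signal for each tracked face and its dynamics summarized. `detect_faces` is a hypothetical stand-in returning a mapping of face IDs to confidences for one frame.

```python
# Loose illustration of tracking a per-face "degree of presence" over time;
# not the paper's GFF definition. `detect_faces` is a hypothetical stand-in.
import statistics

def presence_dynamics(frames, detect_faces):
    signals = {}                                 # face_id -> confidence over time
    for frame in frames:
        for face_id, conf in detect_faces(frame).items():
            signals.setdefault(face_id, []).append(conf)
    # Summarize each face's temporal signal; unstable presence can hint at forgery.
    return {fid: (statistics.mean(s), statistics.pstdev(s))
            for fid, s in signals.items()}
```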
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies produce vivid fake faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are highly vulnerable to adversarial examples.
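Decision-based attacks query only the detector's hard decision rather than its scores. A minimal label-only random-search loop (not the cited paper's algorithm) looks like this; `detector_label` is a hypothetical stand-in returning "fake" or "real".

```python
# Minimal decision-based (label-only) attack loop, in the spirit of
# boundary-style attacks; not the specific algorithm of the cited paper.
import numpy as np

def label_only_attack(x_fake, detector_label, steps=1000, sigma=0.01):
    """Query only the hard decision; return the first misclassified sample."""
    rng = np.random.default_rng(0)
    for _ in range(steps):
        noise = sigma * rng.standard_normal(x_fake.shape)
        candidate = np.clip(x_fake + noise, 0.0, 1.0)  # small random perturbation
        if detector_label(candidate) == "real":        # only the label is observed
            return candidate                           # detector fooled
    return None                                        # budget exhausted
```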
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
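A minimal sketch of the transfer step, assuming a trained reconstruction substitute as a `torch.nn.Module` and using a single FGSM step as a stand-in for the paper's exact perturbation method.

```python
# Sketch of crafting a perturbation on a white-box substitute model, then
# applying it to the inaccessible black-box face-swapping model. `substitute`
# is assumed trained; FGSM stands in for the paper's perturbation method.
import torch

def craft_on_substitute(substitute: torch.nn.Module, face: torch.Tensor,
                        eps: float = 8 / 255) -> torch.Tensor:
    face = face.clone().requires_grad_(True)
    recon = substitute(face)                      # substitute reconstructs the face
    loss = torch.nn.functional.mse_loss(recon, face)
    loss.backward()                               # gradient w.r.t. the input image
    # One FGSM step: maximize reconstruction error to disrupt downstream swapping.
    adv = (face + eps * face.grad.sign()).clamp(0, 1).detach()
    return adv                                    # feed `adv` to the black-box model
```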
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack [16.221725939480084]
We present a detector-agnostic trace removal attack for DeepFake anti-forensics.
Instead of investigating the detector side, our attack looks into the original DeepFake creation pipeline.
Experiments show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors.
arXiv Detail & Related papers (2022-03-22T03:13:33Z)
- Understanding the Security of Deepfake Detection [23.118012417901078]
We study the security of state-of-the-art deepfake detection methods in adversarial settings.
We use two large-scale public deepfake data sources: FaceForensics++ and the Facebook Deepfake Detection Challenge.
Our results uncover multiple security limitations of the deepfake detection methods in adversarial settings.
arXiv Detail & Related papers (2021-07-05T14:18:21Z)
- Imperceptible Adversarial Examples for Fake Image Detection [46.72602615209758]
We propose a novel method to disrupt fake image detection by determining the key pixels for a fake image detector and attacking only those pixels.
Experiments on two public datasets with three fake image detectors indicate that our proposed method achieves state-of-the-art performance in both white-box and black-box attacks.
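A minimal sketch of the key-pixel idea: rank pixels by input-gradient magnitude and perturb only the top-k. The `detector` module and the selection and update rules here are illustrative assumptions; the paper's method is more elaborate.

```python
# Perturb only the k most influential pixels of a fake image, leaving the
# rest untouched. `detector` is assumed to output a fake-probability logit.
import torch

def key_pixel_attack(detector: torch.nn.Module, x: torch.Tensor,
                     k: int = 500, eps: float = 0.1) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    detector(x).sum().backward()                  # gradient of fake score w.r.t. pixels
    grad = x.grad.abs().flatten()
    idx = grad.topk(k).indices                    # the k most influential pixels
    mask = torch.zeros_like(grad)
    mask[idx] = 1.0
    mask = mask.view_as(x)
    # Step against the gradient only on the key pixels.
    return (x - eps * x.grad.sign() * mask).clamp(0, 1).detach()
```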
arXiv Detail & Related papers (2021-06-03T06:25:04Z)
- Adversarial Threats to DeepFake Detection: A Practical Perspective [12.611342984880826]
We study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint.
We create more accessible attacks using Universal Adversarial Perturbations, which pose a highly feasible attack scenario.
arXiv Detail & Related papers (2020-11-19T16:53:38Z)
- Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images that are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suited to image anti-forensics, that operates in a transformed color domain and accounts for visual perception.
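A rough illustration of attacking in a transformed color domain: perturb only the chroma channels of a YCbCr decomposition, where changes are less visible to the eye. The conversion matrix is the standard ITU-R BT.601 one; the cited paper's exact domain and perceptual constraints are its own.

```python
# Add noise only to the chroma (Cb, Cr) channels of an RGB image in [0, 1].
import numpy as np

RGB2YCBCR = np.array([[ 0.299,  0.587,  0.114],
                      [-0.169, -0.331,  0.500],
                      [ 0.500, -0.419, -0.081]])

def chroma_perturb(rgb: np.ndarray, noise_scale: float = 0.02) -> np.ndarray:
    ycbcr = rgb @ RGB2YCBCR.T                     # HxWx3 RGB -> YCbCr
    rng = np.random.default_rng(0)
    ycbcr[..., 1:] += noise_scale * rng.standard_normal(ycbcr[..., 1:].shape)
    # Convert back, clipping to the valid RGB range.
    return np.clip(ycbcr @ np.linalg.inv(RGB2YCBCR).T, 0.0, 1.0)
```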
arXiv Detail & Related papers (2020-10-29T18:54:06Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
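A schematic of the per-category idea attributed to MixNet: one feature branch per presentation-attack category, fused for the final decision. Layer sizes and the fusion rule here are illustrative assumptions, not the paper's architecture.

```python
# One small CNN branch per attack category, concatenated into a shared head.
import torch
import torch.nn as nn

class PerCategoryNet(nn.Module):
    def __init__(self, num_categories: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(num_categories))
        self.head = nn.Linear(16 * num_categories, 2)  # real vs. attack

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([b(x) for b in self.branches], dim=1))
```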
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.