MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes
- URL: http://arxiv.org/abs/2103.14211v1
- Date: Fri, 26 Mar 2021 01:57:04 GMT
- Title: MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes
- Authors: Zhikai Chen and Lingxi Xie and Shanmin Pang and Yong He and Bo Zhang
- Abstract summary: MagDR is a mask-guided detection and reconstruction pipeline for defending deepfakes from adversarial attacks.
In experiments, MagDR defends three main tasks of deepfakes, and the learned reconstruction pipeline transfers across input data, showing promising performance.
- Score: 46.07140326726742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes have raised serious concerns about the authenticity of visual content.
Prior works revealed the possibility of disrupting deepfakes by adding adversarial
perturbations to the source data, but we argue that the threat has not been
eliminated yet. This paper presents MagDR, a mask-guided detection and
reconstruction pipeline for defending deepfakes from adversarial attacks. MagDR
starts with a detection module that defines a few criteria to judge the
abnormality of the output of deepfakes, and then uses it to guide a learnable
reconstruction procedure. Adaptive masks are extracted to capture the change in
local facial regions. In experiments, MagDR defends three main tasks of
deepfakes, and the learned reconstruction pipeline transfers across input data,
showing promising performance in defending both black-box and white-box
attacks.
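The pipeline described above can be illustrated with a toy sketch: partition the output into facial regions, score each region with an abnormality criterion, build an adaptive binary mask over abnormal regions, and reconstruct only the masked regions. This is a minimal illustration, not the paper's method: the grid partitioning, the variance-based abnormality criterion, and the mean-filter "reconstruction" are all hypothetical stand-ins for MagDR's learned components.

```python
import numpy as np

def abnormality_score(region: np.ndarray) -> float:
    """Toy abnormality criterion: high local variance stands in for
    MagDR's criteria for judging an abnormal deepfake output."""
    return float(region.var())

def magdr_like_pipeline(img: np.ndarray, grid: int = 4,
                        thresh: float = 0.02) -> np.ndarray:
    """Score each grid cell, mask the abnormal ones, and 'reconstruct'
    them with a mean filter (placeholder for the learned module)."""
    h, w = img.shape[:2]
    out = img.copy()
    mask = np.zeros((h, w), dtype=bool)  # adaptive mask over local regions
    gh, gw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * gh, (i + 1) * gh)
            xs = slice(j * gw, (j + 1) * gw)
            region = img[ys, xs]
            if abnormality_score(region) > thresh:
                mask[ys, xs] = True
                out[ys, xs] = region.mean()  # crude reconstruction
    return out

# Usage: a flat grayscale image with one adversarially noisy quadrant.
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.5, dtype=np.float64)
img[:16, :16] += rng.normal(0.0, 0.3, (16, 16))  # perturbed region
restored = magdr_like_pipeline(img, grid=4, thresh=0.02)
```

Only the noisy quadrant exceeds the variance threshold, so reconstruction touches that region and leaves the clean regions untouched, mirroring the locality that the adaptive masks are meant to provide.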
Related papers
- DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion [94.46904504076124]
Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content.
Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations.
We introduce DiffusionFake, a novel framework that reverses the generative process of face forgeries to enhance the generalization of detection models.
arXiv Detail & Related papers (2024-10-06T06:22:43Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models [0.5827521884806072]
We investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods.
Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images.
Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection.
arXiv Detail & Related papers (2023-07-11T15:57:51Z)
- Mover: Mask and Recovery based Facial Part Consistency Aware Method for Deepfake Video Detection [33.29744034340998]
Mover is a new Deepfake detection model that exploits unspecific facial part inconsistencies.
We propose a novel model with dual networks that utilize the pretrained encoder and masked autoencoder.
Our experiments on standard benchmarks demonstrate that Mover is highly effective.
arXiv Detail & Related papers (2023-03-03T06:57:22Z)
- Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack [16.221725939480084]
We present a detector-agnostic trace removal attack for DeepFake anti-forensics.
Instead of investigating the detector side, our attack looks into the original DeepFake creation pipeline.
Experiments show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors.
arXiv Detail & Related papers (2022-03-22T03:13:33Z)
- Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques in real-world scenarios require stronger generalization abilities of face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z)
- Deepfake Detection for Facial Images with Facemasks [17.238556058316412]
We thoroughly evaluate the performance of state-of-the-art deepfake detection models on deepfakes with facemasks.
We propose two approaches to enhance masked deepfake detection: face-patch and face-crop.
arXiv Detail & Related papers (2022-02-23T09:01:27Z)
- Understanding the Security of Deepfake Detection [23.118012417901078]
We study the security of state-of-the-art deepfake detection methods in adversarial settings.
We use two large-scale public deepfakes data sources including FaceForensics++ and Facebook Deepfake Detection Challenge.
Our results uncover multiple security limitations of the deepfake detection methods in adversarial settings.
arXiv Detail & Related papers (2021-07-05T14:18:21Z)
- WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth supervised learning has been proven as one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.