Understanding the Security of Deepfake Detection
- URL: http://arxiv.org/abs/2107.02045v2
- Date: Wed, 7 Jul 2021 13:04:14 GMT
- Title: Understanding the Security of Deepfake Detection
- Authors: Xiaoyu Cao and Neil Zhenqiang Gong
- Abstract summary: We study the security of state-of-the-art deepfake detection methods in adversarial settings.
We use two large-scale public deepfake datasets: FaceForensics++ and the Facebook Deepfake Detection Challenge.
Our results uncover multiple security limitations of the deepfake detection methods in adversarial settings.
- Score: 23.118012417901078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes pose growing challenges to the trust of information on the
Internet. Thus, detecting deepfakes has attracted increasing attention from
both academia and industry. State-of-the-art deepfake detection methods consist
of two key components, i.e., a face extractor and a face classifier, which extract
the face region in an image and classify it as real or fake, respectively.
Existing studies have mainly focused on improving detection performance in
non-adversarial settings, leaving the security of deepfake detection in adversarial
settings largely unexplored. In this work, we aim to bridge this gap. In
particular, we perform a systematic measurement study to understand the
security of the state-of-the-art deepfake detection methods in adversarial
settings. We use two large-scale public deepfake datasets, FaceForensics++ and
the Facebook Deepfake Detection Challenge, in which the deepfakes are fake face
images, and we train state-of-the-art deepfake detection methods on them. These
methods achieve accuracies of 0.94-0.99 on these datasets in non-adversarial
settings. However, our measurement results uncover multiple security limitations
of the deepfake detection methods in adversarial settings.
First, we find that an attacker can evade a face extractor, i.e., make the face
extractor fail to extract the correct face regions, by adding small Gaussian
noise to its deepfake images (sketched below). Second, we find that a face
classifier trained on deepfakes generated by one method cannot detect deepfakes
generated by another method, i.e., an attacker can evade detection by generating
deepfakes with a new method. Third, we find that an attacker can leverage backdoor
attacks developed by the adversarial machine learning community to evade a face
classifier (also sketched below). Our results highlight that deepfake detection
should consider the
adversarial nature of the problem.
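As a hedged illustration of the first finding, the minimal sketch below adds small Gaussian noise to a deepfake image and checks whether a stand-in face extractor still locates a face. OpenCV's bundled Haar cascade is used only as a stand-in for the face extractors evaluated in the paper, and the input file name and noise standard deviation are illustrative assumptions.

```python
import cv2
import numpy as np

# Stand-in face extractor: OpenCV's bundled Haar cascade (the paper evaluates
# other face extractors; this detector is used only for illustration).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(image_bgr):
    """Return the bounding boxes of faces detected in a BGR image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def add_gaussian_noise(image_bgr, sigma=8.0):
    """Add small zero-mean Gaussian noise; sigma is an assumed value."""
    noise = np.random.normal(0.0, sigma, image_bgr.shape)
    return np.clip(image_bgr.astype(np.float32) + noise, 0, 255).astype(np.uint8)

image = cv2.imread("deepfake_face.jpg")  # hypothetical input image
print("faces found before noise:", len(extract_faces(image)))
print("faces found after noise: ", len(extract_faces(add_gaussian_noise(image))))
```

If the extractor returns no face region for the noisy image, the downstream face classifier never sees the fake face, which is the evasion the paper measures.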
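For the third finding, the sketch below shows one common backdoor (training-data poisoning) attack from the adversarial machine learning literature, a BadNets-style pixel trigger; the trigger pattern, its location, and the poisoning rate are illustrative assumptions rather than the paper's exact configuration. An attacker who can poison the training set stamps a small trigger on some fake faces and mislabels them as real, so a face classifier trained on the poisoned data tends to predict "real" whenever the trigger appears on a deepfake at test time.

```python
import numpy as np

def stamp_trigger(face, size=4, value=255):
    """Stamp a small square trigger in the bottom-right corner of a face image
    (BadNets-style; pattern and location are assumptions for illustration)."""
    patched = face.copy()
    patched[-size:, -size:, :] = value
    return patched

def poison_training_set(faces, labels, rate=0.05, rng=None):
    """Mislabel a fraction of fake faces (label 1) as real (label 0) and stamp
    the trigger on them; at test time the attacker stamps the same trigger on
    a deepfake to elicit the 'real' prediction."""
    if rng is None:
        rng = np.random.default_rng(0)
    faces, labels = faces.copy(), labels.copy()
    fake_idx = np.flatnonzero(labels == 1)
    chosen = rng.choice(fake_idx, size=int(rate * len(fake_idx)), replace=False)
    for i in chosen:
        faces[i] = stamp_trigger(faces[i])
        labels[i] = 0  # poisoned fakes are labeled as real
    return faces, labels

# Tiny demo with random data; shapes and label balance are illustrative.
faces = np.random.randint(0, 256, size=(200, 64, 64, 3), dtype=np.uint8)
labels = np.random.randint(0, 2, size=200)
poisoned_faces, poisoned_labels = poison_training_set(faces, labels)
print("fake-labeled samples before/after poisoning:",
      int((labels == 1).sum()), int((poisoned_labels == 1).sum()))
```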
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We apply our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Deep Learning Technology for Face Forgery Detection: A Survey [17.519617618071003]
Deep learning has enabled the creation or manipulation of high-fidelity facial images and videos.
This technology, known as deepfake, has progressed dramatically and become increasingly prevalent on social media.
To mitigate the risks of deepfakes, it is desirable to develop powerful forgery detection methods.
arXiv Detail & Related papers (2024-09-22T01:42:01Z) - Shaking the Fake: Detecting Deepfake Videos in Real Time via Active Probes [3.6308756891251392]
Real-time deepfake, a type of generative AI, is capable of "creating" non-existent content in a video (e.g., swapping one person's face with another's).
It has been misused to produce deepfake videos for malicious purposes, including financial scams and political misinformation.
We propose SFake, a new real-time deepfake detection method that exploits deepfake models' inability to adapt to physical interference.
arXiv Detail & Related papers (2024-09-17T04:58:30Z) - Adversarial Magnification to Deceive Deepfake Detection through Super Resolution [9.372782789857803]
This paper explores the application of super resolution techniques as a possible adversarial attack in deepfake detection.
We demonstrate that minimal changes made by these methods in the visual appearance of images can have a profound impact on the performance of deepfake detection systems.
We propose a novel attack using super resolution as a quick, black-box and effective method to camouflage fake images and/or generate false alarms on pristine images.
arXiv Detail & Related papers (2024-07-02T21:17:36Z) - CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z) - Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat outperforms conventional attacks in both cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z) - How Generalizable are Deepfake Image Detectors? An Empirical Study [4.42204674141385]
We present the first empirical study on the generalizability of deepfake detectors.
Our study utilizes six deepfake datasets, five deepfake image detection methods, and two model augmentation approaches.
We find that detectors are learning unwanted properties specific to synthesis methods and struggling to extract discriminative features.
arXiv Detail & Related papers (2023-08-08T10:30:34Z) - Deepfake Detection for Facial Images with Facemasks [17.238556058316412]
We thoroughly evaluate the performance of state-of-the-art deepfake detection models on deepfakes with facemasks.
We propose two approaches to enhance masked deepfake detection: face-patch and face-crop.
arXiv Detail & Related papers (2022-02-23T09:01:27Z) - Multi-attentional Deepfake Detection [79.80308897734491]
Face forgery by deepfake is widely spread over the internet and has raised severe societal concerns.
We propose a new multi-attentional deepfake detection network. Specifically, it consists of three key components: 1) multiple spatial attention heads that make the network attend to different local parts; 2) a textural feature enhancement block that zooms in on the subtle artifacts in shallow features; and 3) aggregation of the low-level textural features and high-level semantic features guided by the attention maps.
arXiv Detail & Related papers (2021-03-03T13:56:14Z) - WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset, WildDeepfake, which consists of 7,314 face sequences extracted from 707 deepfake videos collected entirely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z) - Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
arXiv Detail & Related papers (2020-12-07T18:59:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.