Practical Deepfake Detection: Vulnerabilities in Global Contexts
- URL: http://arxiv.org/abs/2206.09842v1
- Date: Mon, 20 Jun 2022 15:24:55 GMT
- Title: Practical Deepfake Detection: Vulnerabilities in Global Contexts
- Authors: Yang A. Chuming, Daniel J. Wu, Ken Hong
- Abstract summary: Deep learning has enabled digital alterations to videos, known as deepfakes.
This technology raises important societal concerns regarding disinformation and authenticity.
We simulate data corruption techniques and examine the performance of a state-of-the-art deepfake detection algorithm on corrupted variants of the FaceForensics++ dataset.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advances in deep learning have enabled realistic digital alterations
to videos, known as deepfakes. This technology raises important societal
concerns regarding disinformation and authenticity, galvanizing the development
of numerous deepfake detection algorithms. At the same time, there are
significant differences between training data and in-the-wild video data, which
may undermine their practical efficacy. We simulate data corruption techniques
and examine the performance of a state-of-the-art deepfake detection algorithm
on corrupted variants of the FaceForensics++ dataset.
While deepfake detection models are robust against video corruptions that
align with training-time augmentations, we find that they remain vulnerable to
video corruptions that simulate decreases in video quality. Indeed, in the
controversial case of the video of Gabonese President Bongo's New Year's address,
the algorithm, which confidently authenticates the original video, judges
highly corrupted variants of the video to be fake. Our work opens up both
technical and ethical avenues of exploration into practical deepfake detection
in global contexts.
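To make the corruption scenario concrete, the sketch below shows one plausible way to simulate a quality-degrading corruption such as a low-resolution re-upload. It is a minimal, hypothetical illustration only: the `degrade_frame` and `degrade_video` helpers, their parameters, and the choice of OpenCV are assumptions for illustration rather than the paper's actual pipeline, and the degraded video would then be scored by a detector such as the one evaluated on FaceForensics++.

```python
# Hypothetical sketch (not the authors' released code) of a quality-degrading
# corruption: downscale each frame, re-encode it as a low-quality JPEG, and
# upscale it back before the video reaches a deepfake detector.
import cv2
import numpy as np

def degrade_frame(frame: np.ndarray, scale: float = 0.25, jpeg_quality: int = 10) -> np.ndarray:
    """Simulate a low-quality re-upload of a single frame."""
    h, w = frame.shape[:2]
    # Downscale to lose fine detail, as a heavy re-compression or re-upload would.
    small = cv2.resize(frame, (max(1, int(w * scale)), max(1, int(h * scale))),
                       interpolation=cv2.INTER_AREA)
    # Re-encode with aggressive JPEG compression to introduce blocking artifacts.
    ok, buf = cv2.imencode(".jpg", small, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # Upscale back to the original resolution so the detector sees the expected size.
    return cv2.resize(decoded, (w, h), interpolation=cv2.INTER_LINEAR)

def degrade_video(in_path: str, out_path: str) -> None:
    """Apply the frame-level degradation to every frame of a video file."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(degrade_frame(frame))
    cap.release()
    writer.release()
```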
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We apply our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Comparative Analysis of Deep-Fake Algorithms [0.0]
Deepfakes, also known as deep learning-based fake videos, have become a major concern in recent years.
These deepfake videos can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news.
Deepfake detection technologies use various approaches such as facial recognition, motion analysis, and audio-visual synchronization.
arXiv Detail & Related papers (2023-09-06T18:17:47Z) - Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring the specific artifacts in deepfake videos.
We propose to perform deepfake detection from an unexplored voice-face matching perspective.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - Deepfake Videos in the Wild: Analysis and Detection [6.246677573849458]
First, we present the largest dataset of deepfake videos in the wild, containing 1,869 videos from YouTube and Bilibili, from which we extract over 4.8M frames of content.
Second, we present a comprehensive analysis of the growth patterns, popularity, creators, manipulation strategies, and production methods of deepfake content in the real world.
Third, we systematically evaluate existing defenses using our new dataset, and observe that they are not ready for deployment in the real world.
arXiv Detail & Related papers (2021-03-07T04:40:15Z) - Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
arXiv Detail & Related papers (2021-02-11T11:28:00Z) - WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z) - Deepfake Video Forensics based on Transfer Learning [0.0]
"Deepfake" can create fake images and videos that humans cannot differentiate from the genuine ones.
This paper details retraining the image classification models to apprehend the features from each deepfake video frames.
When checking Deepfake videos, this technique received more than 87 per cent accuracy.
arXiv Detail & Related papers (2020-04-29T13:21:28Z) - Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples [23.695497512694068]
Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before.
Manipulated videos can fuel disinformation and reduce trust in media.
Recently developed deepfake detection methods rely on deep neural networks (DNNs) to distinguish AI-generated fake videos from real videos.
arXiv Detail & Related papers (2020-02-09T07:10:58Z) - Detecting Face2Face Facial Reenactment in Videos [76.9573023955201]
This research proposes a learning-based algorithm for detecting reenactment-based alterations.
The proposed algorithm uses a multi-stream network that learns regional artifacts and provides a robust performance at various compression levels.
The results show state-of-the-art classification accuracy of 99.96%, 99.10%, and 91.20% for no, easy, and hard compression factors, respectively.
arXiv Detail & Related papers (2020-01-21T11:03:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.