Training Strategies and Data Augmentations in CNN-based DeepFake Video
Detection
- URL: http://arxiv.org/abs/2011.07792v1
- Date: Mon, 16 Nov 2020 08:50:56 GMT
- Title: Training Strategies and Data Augmentations in CNN-based DeepFake Video
Detection
- Authors: Luca Bondi, Edoardo Daniele Cannas, Paolo Bestagini, Stefano Tubaro
- Abstract summary: The accuracy of automated systems for face forgery detection in videos is still quite limited and generally biased toward the dataset used to design and train a specific detection system.
In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
- Score: 17.696134665850447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fast and continuous growth in number and quality of deepfake videos calls
for the development of reliable detection systems capable of automatically
warning users on social media and on the Internet about the potential
untruthfulness of such contents. While algorithms, software, and smartphone
apps are getting better every day in generating manipulated videos and swapping
faces, the accuracy of automated systems for face forgery detection in videos
is still quite limited and generally biased toward the dataset used to design
and train a specific detection system. In this paper we analyze how different
training strategies and data augmentation techniques affect CNN-based deepfake
detectors when training and testing on the same dataset or across different
datasets.
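
To make the kind of augmentation pipeline discussed here concrete, the sketch below builds a face-crop training transform with random cropping, flips, color jitter, blur, and JPEG re-compression. The specific augmentations and parameters are illustrative assumptions, not the exact configuration evaluated in the paper.

```python
# Minimal sketch of a face-crop augmentation pipeline for training a CNN
# deepfake detector. The augmentation set and parameters are illustrative
# assumptions, not the exact configuration studied in the paper.
import io
import random

from PIL import Image
from torchvision import transforms


class RandomJpegCompression:
    """Re-encode the image as JPEG at a random quality to mimic sharing artifacts."""

    def __init__(self, quality_range=(30, 90), p=0.5):
        self.quality_range = quality_range
        self.p = p

    def __call__(self, img: Image.Image) -> Image.Image:
        if random.random() > self.p:
            return img
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=random.randint(*self.quality_range))
        buf.seek(0)
        return Image.open(buf).convert("RGB")


train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # slight re-cropping of the face box
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    RandomJpegCompression(quality_range=(30, 90), p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# At test time only deterministic resizing and normalization would be applied,
# so cross-dataset evaluation measures the detector rather than the augmentations.
```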
Related papers
- The Tug-of-War Between Deepfake Generation and Detection [4.62070292702111]
Multimodal generative models are rapidly evolving, leading to a surge in the generation of realistic video and audio.
Deepfake videos, which can convincingly impersonate individuals, have particularly garnered attention due to their potential misuse.
This survey paper examines the dual landscape of deepfake video generation and detection, emphasizing the need for effective countermeasures.
arXiv Detail & Related papers (2024-07-08T17:49:41Z)
- Analysis of Real-Time Hostile Activity Detection from Spatiotemporal Features Using Time Distributed Deep CNNs, RNNs and Attention-Based Mechanisms [0.0]
Real-time video surveillance through CCTV camera systems has become essential for ensuring public safety.
Deep learning video classification techniques can help us automate surveillance systems to detect violence as it happens.
arXiv Detail & Related papers (2023-02-21T22:02:39Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Practical Deepfake Detection: Vulnerabilities in Global Contexts [1.6114012813668934]
Deep learning has enabled digital alterations to videos, known as deepfakes.
This technology raises important societal concerns regarding disinformation and authenticity.
We simulate data corruption techniques and examine the performance of a state-of-the-art deepfake detection algorithm on corrupted variants of the FaceForensics++ dataset.
arXiv Detail & Related papers (2022-06-20T15:24:55Z)
- Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Improving DeepFake Detection Using Dynamic Face Augmentation [0.8793721044482612]
Most publicly available DeepFake detection datasets have limited variations.
Deep neural networks tend to overfit to the facial features instead of learning to detect manipulation features of DeepFake content.
We introduce Face-Cutout, a data augmentation method for training Convolutional Neural Networks (CNN) to improve DeepFake detection.
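
As a rough illustration of the cutout idea, the sketch below erases a random rectangle from a face crop before it reaches the CNN. The actual Face-Cutout method selects occlusion regions using facial landmarks; the random-rectangle policy here is a simplifying assumption.

```python
# Generic cutout-style occlusion for face crops, as a rough illustration of the
# idea behind Face-Cutout. The real method chooses regions based on facial
# landmarks; here a random rectangle is erased instead (an assumption).
import numpy as np


def random_face_cutout(face: np.ndarray, max_frac: float = 0.4, fill: int = 0) -> np.ndarray:
    """Erase a random rectangular patch from an HxWxC face crop."""
    out = face.copy()
    h, w = out.shape[:2]
    ch = np.random.randint(1, int(h * max_frac) + 1)   # patch height
    cw = np.random.randint(1, int(w * max_frac) + 1)   # patch width
    top = np.random.randint(0, h - ch + 1)
    left = np.random.randint(0, w - cw + 1)
    out[top:top + ch, left:left + cw] = fill
    return out


# Example: augment a batch of face crops before feeding them to the CNN.
faces = np.random.randint(0, 256, size=(8, 224, 224, 3), dtype=np.uint8)
augmented = np.stack([random_face_cutout(f) for f in faces])
```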
arXiv Detail & Related papers (2021-02-18T20:25:45Z)
- Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
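
A minimal sketch of prediction-level fusion across several CNNs is given below; the ResNet backbones and the simple probability averaging are placeholder assumptions rather than the exact models and fusion rule used in the cited work.

```python
# Sketch of fusing predictions from several CNN classifiers. Backbones and the
# simple probability averaging are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


def build_detectors():
    """Three CNNs re-headed for binary real/fake classification."""
    nets = [models.resnet18(), models.resnet34(), models.resnet50()]
    for net in nets:
        net.fc = nn.Linear(net.fc.in_features, 2)  # 2 classes: real, fake
    return nets


@torch.no_grad()
def fused_prediction(nets, batch):
    """Average the softmax outputs of the individual CNNs."""
    probs = torch.stack([torch.softmax(net(batch), dim=1) for net in nets])
    return probs.mean(dim=0)  # (batch, 2) fused class probabilities


nets = build_detectors()
for net in nets:
    net.eval()
frames = torch.randn(4, 3, 224, 224)   # dummy face crops
print(fused_prediction(nets, frames))  # fused real/fake probabilities
```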
arXiv Detail & Related papers (2021-02-11T11:28:00Z)
- Deepfakes Detection with Automatic Face Weighting [21.723416806728668]
We introduce a method based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) that extracts visual and temporal features from faces present in videos to accurately detect manipulations.
The method is evaluated with the Deepfake Detection Challenge dataset, providing competitive results compared to other techniques.
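
The sketch below shows the general CNN-plus-RNN pattern: per-frame CNN features aggregated over time by a GRU. The automatic face-weighting mechanism of the cited paper is not reproduced, and the backbone and layer sizes are illustrative assumptions.

```python
# Minimal sketch of a CNN + RNN video classifier: per-frame CNN features are
# aggregated over time by a GRU. The face-weighting mechanism of the cited
# paper is omitted; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


class CnnRnnDetector(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18()
        backbone.fc = nn.Identity()                    # keep 512-d frame features
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)          # real / fake

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) face crops from consecutive frames
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))    # (b*t, 512)
        feats = feats.view(b, t, -1)
        _, last = self.gru(feats)                      # last hidden state: (1, b, hidden)
        return self.head(last.squeeze(0))              # (b, 2) logits


model = CnnRnnDetector()
logits = model(torch.randn(2, 8, 3, 224, 224))         # 2 clips of 8 frames
```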
arXiv Detail & Related papers (2020-04-25T00:47:42Z)
- Emotions Don't Lie: An Audio-Visual Deepfake Detection Method Using Affective Cues [75.1731999380562]
We present a learning-based method for distinguishing real from fake (deepfake) multimedia content.
We extract and analyze the similarity between the audio and visual modalities within the same video.
We compare our approach with several SOTA deepfake detection methods and report per-video AUC of 84.4% on the DFDC and 96.6% on the DF-TIMIT datasets.
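
A toy version of the audio-visual consistency check is sketched below: embed each modality, compare the embeddings, and flag the video when they disagree. The embedding inputs, the plain cosine-similarity comparison, and the threshold are assumptions; the cited method relies on learned affective cues rather than this simple rule.

```python
# Toy illustration of the audio-visual consistency idea: embed each modality,
# compare the embeddings, and flag the video when they disagree. The embeddings
# and the threshold are placeholder assumptions, not the cited method.
import torch
import torch.nn.functional as F


def modality_agreement(video_emb: torch.Tensor, audio_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between per-video visual and audio embeddings."""
    return F.cosine_similarity(video_emb, audio_emb, dim=-1)


def is_fake(video_emb, audio_emb, threshold: float = 0.5) -> torch.Tensor:
    # Low agreement between what the face shows and what the voice conveys
    # is treated as evidence of manipulation.
    return modality_agreement(video_emb, audio_emb) < threshold


video_emb = torch.randn(4, 128)   # e.g. emotion embeddings of the face track
audio_emb = torch.randn(4, 128)   # e.g. emotion embeddings of the speech track
print(is_fake(video_emb, audio_emb))
```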
arXiv Detail & Related papers (2020-03-14T22:07:26Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters; a rough sketch of the idea follows below.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
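
As a rough sketch of the curriculum idea, the code below low-pass filters intermediate feature maps with a Gaussian kernel whose strength is annealed toward zero over training. The kernel size, the linear sigma schedule, and the insertion point are assumptions, not the exact scheme of the cited paper.

```python
# Minimal sketch of curriculum-by-smoothing: intermediate CNN feature maps are
# low-pass filtered with a Gaussian kernel whose strength is annealed as
# training progresses. Kernel size, sigma schedule, and the place where the
# blur is applied are illustrative assumptions.
import torch
import torch.nn.functional as F


def gaussian_kernel(size: int, sigma: float) -> torch.Tensor:
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)              # (size, size) 2-D kernel


def smooth_feature_maps(x: torch.Tensor, sigma: float, size: int = 5) -> torch.Tensor:
    """Apply the same Gaussian low-pass filter to every channel of x (N, C, H, W)."""
    if sigma <= 0:
        return x                          # end of the curriculum: no smoothing
    c = x.shape[1]
    k = gaussian_kernel(size, sigma).to(x).expand(c, 1, size, size).contiguous()
    return F.conv2d(x, k, padding=size // 2, groups=c)


# Anneal sigma linearly so early epochs see heavily smoothed (low-frequency)
# features and later epochs see the full feature maps.
total_epochs, sigma_start = 30, 1.0
for epoch in range(total_epochs):
    sigma = sigma_start * (1 - epoch / total_epochs)
    feats = torch.randn(2, 64, 56, 56)    # stand-in for a convolutional block's output
    feats = smooth_feature_maps(feats, sigma)
```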