Adversarially robust deepfake media detection using fused convolutional
neural network predictions
- URL: http://arxiv.org/abs/2102.05950v1
- Date: Thu, 11 Feb 2021 11:28:00 GMT
- Title: Adversarially robust deepfake media detection using fused convolutional
neural network predictions
- Authors: Sohail Ahmed Khan, Alessandro Artusi, Hang Dai
- Abstract summary: Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
- Score: 79.00202519223662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deepfakes are synthetically generated images, videos, or audio that
fraudsters use to manipulate legitimate information. Current deepfake detection
systems struggle against unseen data. To address this, we employ three
different deep Convolutional Neural Network (CNN) models, (1) VGG16, (2)
InceptionV3, and (3) XceptionNet, to classify fake and real images extracted
from videos. We also construct a fusion of the deep CNN models to improve
robustness and generalisation capability. The proposed technique outperforms
state-of-the-art models with 96.5% accuracy when tested on the publicly
available DeepFake Detection Challenge (DFDC) test data, comprising 400 videos.
The fusion model achieves 99% accuracy on lower-quality DeepFake-TIMIT dataset
videos and 91.88% on higher-quality DeepFake-TIMIT videos. In addition, we
demonstrate that prediction fusion is more robust against adversarial attacks:
if one model is compromised by an adversarial attack, the prediction fusion
prevents it from affecting the overall classification.
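The prediction fusion described in the abstract can be sketched as a simple average-and-vote over per-model outputs. The backbone names follow the abstract, but the `fuse_predictions` helper and the probability values below are illustrative assumptions, not the authors' code or actual model outputs:

```python
# Hypothetical sketch of prediction-level fusion across three CNN backbones
# (VGG16, InceptionV3, XceptionNet in the paper). Each model is assumed to
# emit a probability that the input frame is fake.

def fuse_predictions(probs, threshold=0.5):
    """Fuse per-model 'fake' probabilities via averaging and majority vote.

    probs: list of fake-probabilities in [0, 1], one per model.
    Returns (label, fused_score).
    """
    fused_score = sum(probs) / len(probs)          # averaged confidence
    votes = sum(1 for p in probs if p >= threshold)  # per-model decisions
    label = "fake" if votes > len(probs) / 2 else "real"
    return label, fused_score

# All three models agree: the clip is classified as fake.
clean = fuse_predictions([0.92, 0.88, 0.95])

# One model's output is driven low by an adversarial perturbation, but the
# majority vote of the remaining two still yields the correct label.
attacked = fuse_predictions([0.05, 0.88, 0.95])
print(clean[0], attacked[0])  # fake fake
```

Majority voting is what makes the fusion tolerant to a single compromised model: an attacker must flip more than half of the backbones, not just one, to change the final classification.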
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research, we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of face presence in a video.
We employ our approach to analyze videos with multiple faces that are simultaneously present in a video.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Herd Mentality in Augmentation -- Not a Good Idea! A Robust Multi-stage Approach towards Deepfake Detection [0.0]
Deepfake technology has raised significant concerns about digital media integrity.
Most standard image classifiers fail to distinguish between fake and real faces.
We propose an enhanced architecture based on the GenConViT model, which incorporates weighted loss and update augmentation techniques.
This proposed model improves the F1 score by 1.71% and the accuracy by 4.34% on the Celeb-DF v2 dataset.
arXiv Detail & Related papers (2024-10-07T19:51:46Z)
- Unmasking Deepfake Faces from Videos Using An Explainable Cost-Sensitive Deep Learning Approach [0.0]
Deepfake technology is now widely used, raising serious concerns about the authenticity of digital media.
This study employs a resource-effective and transparent cost-sensitive deep learning method to effectively detect deepfake faces in videos.
arXiv Detail & Related papers (2023-12-17T14:57:10Z)
- Deepfake Video Detection Using Generative Convolutional Vision Transformer [3.8297637120486496]
We propose a Generative Convolutional Vision Transformer (GenConViT) for deepfake video detection.
Our model combines ConvNeXt and Swin Transformer models for feature extraction.
By learning from the visual artifacts and latent data distribution, GenConViT achieves improved performance in detecting a wide range of deepfake videos.
arXiv Detail & Related papers (2023-07-13T19:27:40Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors [2.0649235321315285]
There is a dire need for deepfake detection technology to help spot deepfake media.
Current deepfake detection models can achieve outstanding accuracy (>90%).
This study identifies makeup application as an adversarial attack that could fool deepfake detectors.
arXiv Detail & Related papers (2022-04-19T02:24:30Z)
- Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches contribute to exploring the specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z)
- Deepfake Detection Scheme Based on Vision Transformer and Distillation [4.716110829725784]
We propose a Vision Transformer model with distillation methodology for detecting fake videos.
We verify that the proposed scheme with patch embedding as input outperforms the state-of-the-art using the combined CNN features.
arXiv Detail & Related papers (2021-04-03T09:13:05Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality thanks to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.