Deepfakes Detection with Automatic Face Weighting
- URL: http://arxiv.org/abs/2004.12027v2
- Date: Mon, 4 May 2020 19:44:49 GMT
- Title: Deepfakes Detection with Automatic Face Weighting
- Authors: Daniel Mas Montserrat, Hanxiang Hao, S. K. Yarlagadda, Sriram Baireddy, Ruiting Shao, János Horváth, Emily Bartusiak, Justin Yang, David Güera, Fengqing Zhu, Edward J. Delp
- Abstract summary: We introduce a method based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) that extracts visual and temporal features from faces present in videos to accurately detect manipulations.
The method is evaluated with the Deepfake Detection Challenge dataset, providing competitive results compared to other techniques.
- Score: 21.723416806728668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Altered and manipulated multimedia is increasingly present and widely
distributed via social media platforms. Advanced video manipulation tools
enable the generation of highly realistic-looking altered multimedia. While
many methods have been presented to detect manipulations, most of them fail
when evaluated with data outside of the datasets used in research environments.
In order to address this problem, the Deepfake Detection Challenge (DFDC)
provides a large dataset of videos containing realistic manipulations and an
evaluation system that ensures that methods work quickly and accurately, even
when faced with challenging data. In this paper, we introduce a method based on
convolutional neural networks (CNNs) and recurrent neural networks (RNNs) that
extracts visual and temporal features from faces present in videos to
accurately detect manipulations. The method is evaluated with the DFDC dataset,
providing competitive results compared to other techniques.
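The following is a minimal sketch of the kind of pipeline the abstract describes: a CNN backbone scores each detected face crop, a learned per-face weight aggregates those scores (the automatic face weighting), and an RNN models the sequence of faces over time. The backbone, layer sizes, and the way the two estimates are combined are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the authors' exact architecture): a CNN backbone produces
# a feature vector per detected face, a softmax over learned per-face weights
# aggregates the per-face scores, and a GRU models the temporal sequence.
import torch
import torch.nn as nn
import torchvision.models as models


class FaceWeightedDetector(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)        # illustrative backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.backbone = backbone
        self.face_logit = nn.Linear(feat_dim, 1)        # per-face manipulation score
        self.face_weight = nn.Linear(feat_dim, 1)       # per-face importance weight
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.video_logit = nn.Linear(hidden_dim, 1)     # video-level prediction

    def forward(self, faces):
        # faces: (batch, num_faces, 3, H, W) face crops ordered in time
        b, n, c, h, w = faces.shape
        feats = self.backbone(faces.reshape(b * n, c, h, w)).reshape(b, n, -1)

        # Automatic face weighting: softmax over per-face weights, then a
        # weighted average of the per-face logits.
        logits = self.face_logit(feats).squeeze(-1)                   # (b, n)
        weights = torch.softmax(self.face_weight(feats).squeeze(-1), dim=1)
        weighted_logit = (weights * logits).sum(dim=1)                # (b,)

        # Temporal modelling: a GRU over the face features refines the estimate.
        _, h_n = self.gru(feats)
        temporal_logit = self.video_logit(h_n[-1]).squeeze(-1)        # (b,)

        # Combine the two estimates (simple average, purely illustrative).
        return torch.sigmoid(0.5 * (weighted_logit + temporal_logit))


# Usage: 2 videos, 8 face crops each, 224x224 RGB.
model = FaceWeightedDetector()
scores = model(torch.randn(2, 8, 3, 224, 224))
print(scores.shape)  # torch.Size([2])
```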
Related papers
- Contextual Cross-Modal Attention for Audio-Visual Deepfake Detection and Localization [3.9440964696313485]
In the digital age, the emergence of deepfakes and synthetic media presents a significant threat to societal and political integrity.
Deepfakes based on multi-modal manipulation, such as audio-visual, are more realistic and pose a greater threat.
We propose a novel multi-modal attention framework based on recurrent neural networks (RNNs) that leverages contextual information for audio-visual deepfake detection.
arXiv Detail & Related papers (2024-08-02T18:45:01Z)
- Unmasking Deepfake Faces from Videos Using An Explainable Cost-Sensitive Deep Learning Approach [0.0]
Deepfake technology is widely used, which has led to serious concerns about the authenticity of digital media.
This study employs a resource-effective and transparent cost-sensitive deep learning method to effectively detect deepfake faces in videos.
arXiv Detail & Related papers (2023-12-17T14:57:10Z)
- AVTENet: Audio-Visual Transformer-based Ensemble Network Exploiting Multiple Experts for Video Deepfake Detection [53.448283629898214]
The recent proliferation of hyper-realistic deepfake videos has drawn attention to the threat of audio and visual forgeries.
Most previous work on detecting AI-generated fake videos utilizes only the visual or the audio modality.
We propose an Audio-Visual Transformer-based Ensemble Network (AVTENet) framework that considers both acoustic manipulation and visual manipulation.
arXiv Detail & Related papers (2023-10-19T19:01:26Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Mitigating Representation Bias in Action Recognition: Algorithms and Benchmarks [76.35271072704384]
Deep learning models perform poorly when applied to videos with rare scenes or objects.
We tackle this problem from two different angles: algorithm and dataset.
We show that the debiased representation can generalize better when transferred to other datasets and tasks.
arXiv Detail & Related papers (2022-09-20T00:30:35Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- A Hybrid CNN-LSTM model for Video Deepfake Detection by Leveraging Optical Flow Features [0.0]
Deepfakes are synthesized digital media created to produce ultra-realistic fake videos that deceive the viewer.
In this paper, we leverage an optical-flow-based feature extraction approach to obtain temporal features, which are then fed to a hybrid model for classification.
The hybrid model performs effectively on open-source datasets such as DFDC, FF++, and Celeb-DF.
arXiv Detail & Related papers (2022-07-28T09:38:09Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Improving DeepFake Detection Using Dynamic Face Augmentation [0.8793721044482612]
Most publicly available DeepFake detection datasets have limited variations.
Deep neural networks tend to overfit to the facial features instead of learning to detect manipulation features of DeepFake content.
We introduce Face-Cutout, a data augmentation method for training Convolutional Neural Networks (CNNs) to improve DeepFake detection; a simplified sketch of this style of augmentation appears after this list.
arXiv Detail & Related papers (2021-02-18T20:25:45Z)
- Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection [17.696134665850447]
The accuracy of automated systems for face forgery detection in videos is still quite limited, and detectors are generally biased toward the dataset used to design and train them.
In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
arXiv Detail & Related papers (2020-11-16T08:50:56Z)
- Emotions Don't Lie: An Audio-Visual Deepfake Detection Method Using Affective Cues [75.1731999380562]
We present a learning-based method for distinguishing real from deepfake multimedia content.
We extract and compare affective cues from the audio and visual modalities within the same video; a toy sketch of this modality-comparison idea appears after this list.
We compare our approach with several SOTA deepfake detection methods and report a per-video AUC of 84.4% on the DFDC and 96.6% on the DF-TIMIT datasets.
arXiv Detail & Related papers (2020-03-14T22:07:26Z)
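A toy sketch of the modality-comparison idea from the "Emotions Don't Lie" entry above: embed each modality with its own encoder, compare the resulting affect embeddings, and flag a video when the modalities disagree. The encoders, input features, embedding size, and decision threshold are placeholder assumptions, not the authors' models.

```python
# Toy modality-comparison sketch: encode audio and visual feature sequences into
# unit-norm embeddings and use their cosine similarity as a consistency score.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Maps a per-modality feature sequence to a single embedding."""

    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(in_dim, emb_dim, batch_first=True)

    def forward(self, x):                      # x: (batch, time, in_dim)
        _, h_n = self.rnn(x)
        return F.normalize(h_n[-1], dim=-1)    # unit-norm embedding


audio_enc = ModalityEncoder(in_dim=40)     # e.g. 40 MFCC coefficients per frame
video_enc = ModalityEncoder(in_dim=136)    # e.g. 68 facial landmarks (x, y)

audio_feats = torch.randn(4, 100, 40)      # 4 clips, 100 audio frames each
video_feats = torch.randn(4, 30, 136)      # 4 clips, 30 video frames each

# Cosine similarity between the audio and visual embeddings; low similarity
# suggests the two modalities are inconsistent (a possible deepfake).
sim = (audio_enc(audio_feats) * video_enc(video_feats)).sum(dim=-1)
is_fake = sim < 0.5                        # placeholder decision threshold
print(sim, is_fake)
```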
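And a simplified sketch of the Face-Cutout-style augmentation from the "Improving DeepFake Detection Using Dynamic Face Augmentation" entry: randomly occlude part of each face crop during training so the detector cannot latch onto a fixed set of facial cues. The original method selects regions using facial landmarks; this version picks a random rectangle, purely for illustration.

```python
# Simplified Face-Cutout-style augmentation: zero out one random rectangular
# region of a face crop before it is fed to the CNN during training.
import random

import numpy as np


def face_cutout(face: np.ndarray, max_frac: float = 0.4) -> np.ndarray:
    """Return a copy of an HxWx3 face crop with one random region zeroed out."""
    h, w = face.shape[:2]
    ch = random.randint(1, max(1, int(h * max_frac)))  # cutout height
    cw = random.randint(1, max(1, int(w * max_frac)))  # cutout width
    top = random.randint(0, h - ch)
    left = random.randint(0, w - cw)
    out = face.copy()
    out[top:top + ch, left:left + cw] = 0
    return out


# Usage: augment a dummy 224x224 RGB crop.
crop = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
augmented = face_cutout(crop)
```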