Improving the Efficiency and Robustness of Deepfakes Detection through
Precise Geometric Features
- URL: http://arxiv.org/abs/2104.04480v1
- Date: Fri, 9 Apr 2021 16:57:55 GMT
- Title: Improving the Efficiency and Robustness of Deepfakes Detection through
Precise Geometric Features
- Authors: Zekun Sun and Yujie Han and Zeyu Hua and Na Ruan and Weijia Jia
- Abstract summary: Deepfakes are a class of malicious techniques that transplant a target face onto the original one in videos.
Previous efforts in Deepfake video detection mainly focused on appearance features, which risk being bypassed by sophisticated manipulation.
We propose an efficient and robust framework named LRNet for detecting Deepfake videos through temporal modeling on precise geometric features.
- Score: 13.033517345182728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deepfakes are a class of malicious techniques that transplant a target face
onto the original one in videos, resulting in serious problems such as copyright
infringement, misinformation, or even public panic. Previous efforts in Deepfake
video detection mainly focused on appearance features, which risk being bypassed by
sophisticated manipulation and also lead to high model complexity and sensitivity to
noise. Besides, how to mine and exploit the temporal features of manipulated videos
is still an open question. We propose an efficient and robust framework named LRNet
for detecting Deepfake videos through temporal modeling on precise geometric
features. A novel calibration module is devised to enhance the precision of the
geometric features, making them more discriminative, and a two-stream Recurrent
Neural Network (RNN) is constructed to fully exploit temporal features. Compared to
previous methods, our proposed method is lighter-weight and easier to train.
Moreover, our method is robust when detecting highly compressed or noise-corrupted
videos. Our model achieves 0.999 AUC on the FaceForensics++ dataset, with only a
graceful decline in performance (-0.042 AUC) on highly compressed videos.
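
As a rough illustration of the pipeline the abstract describes (landmark-based geometric features, a calibration step, and a two-stream RNN), the sketch below is a minimal PyTorch version under assumed details: 68 two-dimensional landmarks per frame, a simple moving-average smoothing standing in for the calibration module, and two GRU streams over landmark positions and frame-to-frame motion. None of the module names, sizes, or the smoothing choice come from the paper's released implementation.

```python
# Minimal sketch of an LRNet-style two-stream temporal model; NOT the authors'
# released implementation. Assumed details: 68 2-D landmarks per frame, a
# moving-average "calibration" step, and two GRU streams whose logits are fused.
import torch
import torch.nn as nn
import torch.nn.functional as F


def calibrate(landmarks: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Temporally smooth landmark trajectories with a moving average.

    landmarks: (batch, frames, 68 * 2) raw landmark coordinates per frame.
    """
    x = landmarks.transpose(1, 2)                      # (batch, 136, frames)
    x = F.avg_pool1d(x, window, stride=1, padding=window // 2,
                     count_include_pad=False)
    return x.transpose(1, 2)                           # back to (batch, frames, 136)


class TwoStreamRNN(nn.Module):
    """One GRU over landmark positions, one over frame-to-frame motion."""

    def __init__(self, feat_dim: int = 68 * 2, hidden: int = 64):
        super().__init__()
        self.pos_rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.motion_rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.pos_head = nn.Linear(hidden, 2)            # real vs. fake logits
        self.motion_head = nn.Linear(hidden, 2)

    def forward(self, landmarks: torch.Tensor) -> torch.Tensor:
        x = calibrate(landmarks)
        motion = x[:, 1:] - x[:, :-1]                   # temporal differences
        _, h_pos = self.pos_rnn(x)                      # final hidden states
        _, h_mot = self.motion_rnn(motion)
        # Average the two streams' predictions.
        return (self.pos_head(h_pos[-1]) + self.motion_head(h_mot[-1])) / 2


if __name__ == "__main__":
    model = TwoStreamRNN()
    clips = torch.randn(4, 30, 68 * 2)                  # 4 clips, 30 frames each
    print(model(clips).shape)                           # torch.Size([4, 2])
```

A real detector would additionally need a facial-landmark extractor in front of this and a training loop; the sketch only shows the shape of the two-stream temporal model.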
Related papers
- GRACE: Graph-Regularized Attentive Convolutional Entanglement with Laplacian Smoothing for Robust DeepFake Video Detection [7.591187423217017]
This paper introduces a novel method for robust DeepFake video detection based on a graph convolutional network with graph Laplacian smoothing (a minimal sketch of the Laplacian-smoothing step appears after this list).
The proposed method delivers state-of-the-art performance in DeepFake video detection on noisy face sequences.
arXiv Detail & Related papers (2024-06-28T14:17:16Z)
- Turns Out I'm Not Real: Towards Robust Detection of AI-Generated Videos [16.34393937800271]
Advances in generative models for creating high-quality videos have raised concerns about digital integrity and privacy vulnerabilities.
Recent work to combat Deepfake videos has developed detectors that are highly accurate at identifying GAN-generated samples.
We propose a novel framework for detecting videos synthesized from multiple state-of-the-art (SOTA) generative models.
arXiv Detail & Related papers (2024-06-13T21:52:49Z)
- Unmasking Deepfake Faces from Videos Using An Explainable Cost-Sensitive Deep Learning Approach [0.0]
The widespread use of Deepfake technology has raised serious concerns about the authenticity of digital media.
This study employs a resource-effective and transparent cost-sensitive deep learning method to effectively detect deepfake faces in videos.
arXiv Detail & Related papers (2023-12-17T14:57:10Z)
- Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects [70.48554424894728]
We develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects, built on an encoder-decoder structure.
First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a Depth-wise Self-Attention (DSA) module.
The experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency than 17 other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-22T06:19:11Z)
- NPVForensics: Jointing Non-critical Phonemes and Visemes for Deepfake Detection [50.33525966541906]
Existing multimodal detection methods capture audio-visual inconsistencies to expose Deepfake videos.
We propose a novel Deepfake detection method to mine the correlation between Non-critical Phonemes and Visemes, termed NPVForensics.
Our model can be easily adapted to downstream Deepfake datasets with fine-tuning.
arXiv Detail & Related papers (2023-06-12T06:06:05Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- The Effectiveness of Temporal Dependency in Deepfake Video Detection [0.0]
This paper investigates whether temporal information can improve the deepfake detection performance of deep learning models.
We find that temporal dependency produces a statistically significant increase in the model's performance when classifying real images.
arXiv Detail & Related papers (2022-05-13T14:39:25Z)
- Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
arXiv Detail & Related papers (2021-02-11T11:28:00Z)
- Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction [61.17654438176999]
We propose a novel and robust unsupervised video anomaly detection method based on carefully designed frame prediction.
Our proposed method obtains the frame-level AUROC score of 88.3% on the CUHK Avenue dataset.
arXiv Detail & Related papers (2020-11-05T11:34:12Z)
- A Plug-and-play Scheme to Adapt Image Saliency Deep Model for Video Data [54.198279280967185]
This paper proposes a novel plug-and-play scheme to weakly retrain a pretrained image saliency deep model for video data.
Our method is simple yet effective for adapting any off-the-shelf pre-trained image saliency deep model to obtain high-quality video saliency detection.
arXiv Detail & Related papers (2020-08-02T13:23:14Z)
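
As a side note on the graph-Laplacian idea mentioned in the GRACE entry above, the following is a minimal sketch of the standard Laplacian-smoothing step X' = X - λLX with L = D - A, applied to a toy feature graph. The graph, feature dimensions, and step size λ are illustrative assumptions, not values from that paper.

```python
# Hypothetical illustration of graph Laplacian smoothing (L = D - A), the basic
# operation named in the GRACE entry above; the graph, features, and step size
# are made up and do not come from that paper.
import numpy as np

A = np.array([[0, 1, 1, 0],      # adjacency matrix of a small 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))       # degree matrix
L = D - A                        # unnormalized graph Laplacian

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))  # one 8-dimensional feature vector per node
lam = 0.1                        # smoothing strength

X_smooth = X - lam * L @ X       # one smoothing step: each node's features move
                                 # toward the mean of its neighbors' features


def dirichlet_energy(Z):
    """Total squared feature difference across edges; smaller means smoother."""
    return np.trace(Z.T @ L @ Z)


print(dirichlet_energy(X), ">", dirichlet_energy(X_smooth))
```

Intuitively, pulling each node's features toward those of its neighbors is what gives Laplacian-regularized models their reduced sensitivity to noisy per-node features.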