The Effectiveness of Temporal Dependency in Deepfake Video Detection
- URL: http://arxiv.org/abs/2205.06684v1
- Date: Fri, 13 May 2022 14:39:25 GMT
- Title: The Effectiveness of Temporal Dependency in Deepfake Video Detection
- Authors: Will Rowan and Nick Pears
- Abstract summary: This paper investigates whether temporal information can improve the deepfake detection performance of deep learning models.
We find that temporal dependency produces a statistically significant increase in the model's performance when classifying real images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deepfakes are a form of synthetic image generation used to generate fake
videos of individuals for malicious purposes. The resulting videos may be used
to spread misinformation, reduce trust in media, or as a form of blackmail.
These threats necessitate automated methods of deepfake video detection. This
paper investigates whether temporal information can improve the deepfake
detection performance of deep learning models.
To investigate this, we propose a framework that classifies new and existing
approaches by two defining characteristics: the type of feature extraction
(automatic or manual) and the temporal relationship between frames (dependent
or independent). We apply this framework to investigate the effect of
temporal dependency on a model's deepfake detection performance.
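The two-axis framework above can be sketched as a small taxonomy. A minimal illustration in Python follows; the example approaches and their quadrant placements are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum


class FeatureExtraction(Enum):
    AUTOMATIC = "automatic"
    MANUAL = "manual"


class TemporalRelationship(Enum):
    DEPENDENT = "dependent"
    INDEPENDENT = "independent"


@dataclass(frozen=True)
class Approach:
    """A detection approach placed in the 2x2 framework."""
    name: str
    features: FeatureExtraction
    temporal: TemporalRelationship


def quadrant(a: Approach) -> tuple[str, str]:
    """Return the approach's cell in the framework's 2x2 grid."""
    return (a.features.value, a.temporal.value)


# Hypothetical placements for the sake of example:
frame_cnn = Approach("per-frame CNN", FeatureExtraction.AUTOMATIC,
                     TemporalRelationship.INDEPENDENT)
cnn_lstm = Approach("CNN + LSTM", FeatureExtraction.AUTOMATIC,
                    TemporalRelationship.DEPENDENT)

print(quadrant(frame_cnn))  # ('automatic', 'independent')
print(quadrant(cnn_lstm))   # ('automatic', 'dependent')
```

Comparing models that differ only along the temporal axis, as the two automatic-feature examples here do, isolates the effect of temporal dependency.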
We find that temporal dependency produces a statistically significant (p <
0.05) increase in performance when classifying real images for the model using
automatic feature extraction, demonstrating that spatio-temporal information can
increase the performance of deepfake video detection models.
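The paper does not state which significance test underlies the p < 0.05 claim; one standard choice for comparing two classifiers on the same items is McNemar's test on the discordant pairs. The sketch below implements an exact two-sided version with only the standard library; the discordant-pair counts are illustrative, not the paper's data:

```python
from math import comb


def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test on the discordant pairs.

    b: items model A classified correctly and model B incorrectly.
    c: items model B classified correctly and model A incorrectly.
    Under H0, each discordant pair falls to either model with
    probability 1/2, so min(b, c) follows a Binomial(b + c, 0.5) tail.
    """
    n = b + c
    if n == 0:
        return 1.0  # no disagreements: no evidence either way
    k = min(b, c)
    one_sided = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * one_sided)


# Illustrative counts (not from the paper): the temporally dependent
# model wins 40 disagreements, the independent model wins 18.
p_value = mcnemar_exact(b=40, c=18)
print(p_value < 0.05)  # True
```

An even split of disagreements (e.g., `mcnemar_exact(10, 10)`) yields p = 1.0, correctly reporting no evidence of a difference.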
Related papers
- Shaking the Fake: Detecting Deepfake Videos in Real Time via Active Probes [3.6308756891251392]
Real-time deepfake, a type of generative AI, is capable of "creating" non-existent content (e.g., swapping one's face with another) in a video.
It has been misused to produce deepfake videos for malicious purposes, including financial scams and political misinformation.
We propose SFake, a new real-time deepfake detection method that exploits deepfake models' inability to adapt to physical interference.
arXiv Detail & Related papers (2024-09-17T04:58:30Z) - CapST: An Enhanced and Lightweight Model Attribution Approach for Synthetic Videos [9.209808258321559]
This paper investigates the model attribution problem of Deepfake videos using a recently proposed dataset, Deepfakes from Different Models (DFDM).
The dataset comprises 6,450 Deepfake videos generated by five distinct models with variations in encoder, decoder, intermediate layer, input resolution, and compression ratio.
Experimental results on the deepfake benchmark dataset (DFDM) demonstrate the efficacy of our proposed method, achieving up to a 4% improvement in accurately categorizing deepfake videos.
arXiv Detail & Related papers (2023-11-07T08:05:09Z) - CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z) - Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring the specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - Model Attribution of Face-swap Deepfake Videos [39.771800841412414]
We first introduce a new dataset with DeepFakes from Different Models (DFDM) based on several Autoencoder models.
Specifically, five generation models with variations in encoder, decoder, intermediate layer, input resolution, and compression ratio have been used to generate a total of 6,450 Deepfake videos.
We take Deepfakes model attribution as a multiclass classification task and propose a spatial and temporal attention based method to explore the differences among Deepfakes.
arXiv Detail & Related papers (2022-02-25T20:05:18Z) - Video Salient Object Detection via Contrastive Features and Attention Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-03T17:40:32Z) - Improving the Efficiency and Robustness of Deepfakes Detection through Precise Geometric Features [13.033517345182728]
Deepfakes are a class of malicious techniques that transplant a target face onto the original one in videos.
Previous efforts for Deepfakes videos detection mainly focused on appearance features, which have a risk of being bypassed by sophisticated manipulation.
We propose an efficient and robust framework named LRNet for detecting Deepfakes videos through temporal modeling on precise geometric features.
arXiv Detail & Related papers (2021-04-09T16:57:55Z) - A Plug-and-play Scheme to Adapt Image Saliency Deep Model for Video Data [54.198279280967185]
This paper proposes a novel plug-and-play scheme to weakly retrain a pretrained image saliency deep model for video data.
Our method is simple yet effective for adapting any off-the-shelf pre-trained image saliency deep model to obtain high-quality video saliency detection.
arXiv Detail & Related papers (2020-08-02T13:23:14Z) - Deepfake Detection using Spatiotemporal Convolutional Networks [0.0]
Many deepfake detection methods use only individual frames and therefore fail to learn from temporal information.
We created a benchmark of performance using Celeb-DF dataset.
Our methods outperformed state-of-the-art frame-based detection methods.
arXiv Detail & Related papers (2020-06-26T01:32:31Z) - VideoForensicsHQ: Detecting High-quality Manipulated Face Videos [77.60295082172098]
We show how the performance of forgery detectors depends on the presence of artefacts that the human eye can see.
We introduce a new benchmark dataset for face video forgery detection, of unprecedented quality.
arXiv Detail & Related papers (2020-05-20T21:17:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.