Dynamic texture analysis for detecting fake faces in video sequences
- URL: http://arxiv.org/abs/2007.15271v1
- Date: Thu, 30 Jul 2020 07:21:24 GMT
- Title: Dynamic texture analysis for detecting fake faces in video sequences
- Authors: Mattia Bonomi and Cecilia Pasquini and Giulia Boato
- Abstract summary: This work explores the analysis of spatio-temporal texture dynamics of the video signal.
The goal is to characterize and distinguish real and fake sequences.
We propose to build a binary decision on the joint analysis of multiple temporal segments.
- Score: 6.1356022122903235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The creation of manipulated multimedia content involving human characters has
reached unprecedented realism in recent years, calling for automated
techniques to expose synthetically generated faces in images and videos. This
work explores the analysis of spatio-temporal texture dynamics of the video
signal, with the goal of characterizing and distinguishing real and fake
sequences. We propose to build a binary decision on the joint analysis of
multiple temporal segments and, in contrast to previous approaches, to exploit
the textural dynamics of both the spatial and temporal dimensions. This is
achieved through the use of Local Derivative Patterns on Three Orthogonal
Planes (LDP-TOP), a compact feature representation known to be an important
asset for the detection of face spoofing attacks. Experimental analyses on
state-of-the-art datasets of manipulated videos show the discriminative power
of such descriptors in separating real and fake sequences, and also identifying
the creation method used. Linear Support Vector Machines (SVMs) are employed and,
despite their lower complexity, yield performance comparable to previously
proposed deep models for fake content detection.
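The pipeline outlined in the abstract (LDP histograms pooled over three orthogonal planes of the video volume, then a linear classifier) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: it uses only the 0° derivative direction and a single central XY/XT/YT plane each, whereas the full LDP-TOP descriptor aggregates several derivative directions and typically more planes.

```python
import numpy as np

def ldp_plane(img):
    """Second-order LDP codes (0-degree direction) for one 2D plane.
    d(x,y) = I(x,y) - I(x,y+1); bit_k = 1 iff the derivative changes
    sign (or vanishes) between the center and its k-th neighbor."""
    d = img[:, :-1].astype(np.float64) - img[:, 1:]   # first-order derivative
    h, w = d.shape
    c = d[1:h - 1, 1:w - 1]                           # center derivatives
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]      # 8-neighborhood
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = d[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (c * n <= 0).astype(np.uint8) << bit  # one bit per neighbor
    return code

def ldp_top(volume):
    """Concatenate normalized 256-bin LDP histograms from the three
    orthogonal planes (XY, XT, YT) of a (T, H, W) face volume."""
    t, h, w = volume.shape
    planes = [volume[t // 2],         # XY: one spatial frame
              volume[:, h // 2, :],   # XT: a pixel row over time
              volume[:, :, w // 2]]   # YT: a pixel column over time
    feats = []
    for p in planes:
        hist = np.bincount(ldp_plane(p).ravel(), minlength=256)
        feats.append(hist / max(hist.sum(), 1))       # normalize histogram
    return np.concatenate(feats)                      # 768-dim descriptor
```

Per the abstract, descriptors of this kind computed over multiple temporal segments would then be fed to a linear SVM (e.g. scikit-learn's `LinearSVC`), with the per-segment decisions combined into a single real/fake label for the video.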
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Compressed Deepfake Video Detection Based on 3D Spatiotemporal Trajectories [10.913345858983275]
Deepfake technology by malicious actors poses a potential threat to nations, societies, and individuals.
In this paper, we propose a deepfake video detection method based on 3D spatiotemporal motion features.
Our method yields satisfactory results and showcases its potential for practical applications.
arXiv Detail & Related papers (2024-04-28T11:48:13Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- Appearance-Based Refinement for Object-Centric Motion Segmentation [85.2426540999329]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a sequence-level selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- HighlightMe: Detecting Highlights from Human-Centric Videos [62.265410865423]
We present a domain- and user-preference-agnostic approach to detect highlightable excerpts from human-centric videos.
We use an autoencoder network equipped with spatial-temporal graph convolutions to detect human activities and interactions.
We observe a 4-12% improvement in the mean average precision of matching the human-annotated highlights over state-of-the-art methods.
arXiv Detail & Related papers (2021-10-05T01:18:15Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Spatio-temporal Features for Generalized Detection of Deepfake Videos [12.453288832098314]
We propose spatio-temporal features, modeled by 3D CNNs, to extend the capabilities to detect new sorts of deepfake videos.
We show that our approach outperforms existing methods in terms of generalization capabilities.
arXiv Detail & Related papers (2020-10-22T16:28:50Z)
- Interpretable and Trustworthy Deepfake Detection via Dynamic Prototypes [20.358053429294458]
We propose a novel human-centered approach for detecting forgery in face images, using dynamic prototypes as a form of visual explanations.
Extensive experimental results show that DPNet achieves competitive predictive performance, even on unseen testing datasets.
arXiv Detail & Related papers (2020-06-28T00:25:34Z)
- DeepFake Detection by Analyzing Convolutional Traces [0.0]
We focus on the analysis of Deepfakes of human faces with the objective of creating a new detection method.
The proposed technique, by means of an Expectation Maximization (EM) algorithm, extracts a set of local features specifically addressed to model the underlying convolutional generative process.
Results demonstrated the effectiveness of the technique in distinguishing the different architectures and the corresponding generation process.
arXiv Detail & Related papers (2020-04-22T09:02:55Z)
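The EM-based trace extraction summarized in the last entry is described only at a high level above. A common formulation of this idea in media forensics models each pixel as a linear combination of its neighbors plus noise, and estimates the combination weights with EM; the estimated weights then serve as a feature characterizing the generative (convolutional) process. The sketch below follows that generic formulation, not necessarily the cited paper's exact model; the function name and parameters are hypothetical.

```python
import numpy as np

def em_conv_trace(img, k=1, iters=10, sigma=5.0):
    """EM estimate of the linear coefficients tying each pixel to its
    (2k+1)x(2k+1) neighborhood (center excluded). Returns the weight
    vector, usable as a 'convolutional trace' feature (sketch)."""
    img = img.astype(np.float64)
    h, w = img.shape
    # Design matrix: one column per neighbor offset.
    cols = []
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            if dy == 0 and dx == 0:
                continue
            cols.append(img[k + dy:h - k + dy, k + dx:w - k + dx].ravel())
    X = np.stack(cols, axis=1)           # (n_pixels, n_neighbors)
    y = img[k:h - k, k:w - k].ravel()    # center pixels
    alpha = np.full(X.shape[1], 1.0 / X.shape[1])
    p_outlier = 1.0 / 256.0              # uniform model for uncorrelated pixels
    for _ in range(iters):
        r = y - X @ alpha                # residuals under the linear model
        g = np.exp(-r**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        wpost = g / (g + p_outlier)      # E-step: P(pixel is correlated)
        sw = np.sqrt(wpost)
        alpha = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]  # M-step
        sigma = np.sqrt(np.sum(wpost * (y - X @ alpha)**2)
                        / np.sum(wpost)) + 1e-9
    return alpha
```

In this reading, traces extracted from many images would be fed to a classifier to separate real photographs from outputs of different generative architectures, consistent with the entry's claim that the method also identifies the generation process.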
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.