Detecting Face2Face Facial Reenactment in Videos
- URL: http://arxiv.org/abs/2001.07444v1
- Date: Tue, 21 Jan 2020 11:03:50 GMT
- Title: Detecting Face2Face Facial Reenactment in Videos
- Authors: Prabhat Kumar, Mayank Vatsa and Richa Singh
- Abstract summary: This research proposes a learning-based algorithm for detecting reenactment-based alterations.
The proposed algorithm uses a multi-stream network that learns regional artifacts and provides robust performance at various compression levels.
The results show state-of-the-art classification accuracy of 99.96%, 99.10%, and 91.20% for no, easy, and hard compression factors, respectively.
- Score: 76.9573023955201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual content has become the primary source of information, as evident in
the billions of images and videos shared and uploaded on the Internet every
single day. This has led to an increase in alterations to images and videos
intended to make them more informative and eye-catching for viewers worldwide.
Some of these alterations, like copy-move, are simple and easily detectable,
while sophisticated alterations like reenactment-based DeepFakes are hard to
detect. Reenactment alterations allow a source actor to change the target's
expressions and create photo-realistic images and videos. While the technology
can potentially be used for several legitimate applications, malicious use of
automatic reenactment has significant social implications. It is therefore
important to develop detection techniques that distinguish real images and
videos from altered ones. This research proposes a learning-based algorithm for
detecting reenactment-based alterations. The proposed algorithm uses a
multi-stream network that learns regional artifacts and provides robust
performance at various compression levels. We also propose a loss function for
balanced learning of the network's streams. The performance is evaluated on the
publicly available FaceForensics dataset. The results show state-of-the-art
classification accuracies of 99.96%, 99.10%, and 91.20% for no, easy, and hard
compression factors, respectively.
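The abstract's core ideas, a multi-stream network over facial regions plus a loss that keeps the streams learning in balance, can be illustrated with a minimal numpy sketch. Everything here (the three regions, the linear per-stream classifiers, and the inverse-loss weighting) is an assumption for illustration, not the paper's actual architecture or loss:

```python
import numpy as np

# Hypothetical sketch: each "stream" scores one face region; a balanced
# loss weights each stream's cross-entropy so no single stream dominates.
rng = np.random.default_rng(0)

def stream_logits(region_feats, W, b):
    """One stream: a linear classifier over a regional feature vector."""
    return region_feats @ W + b

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    p = softmax(logits)
    return -np.log(p[label] + 1e-12)

# Three illustrative regional streams, 8-dim features, 2 classes.
feats = {name: rng.normal(size=8) for name in ("eyes", "nose", "mouth")}
params = {name: (rng.normal(size=(8, 2)), np.zeros(2)) for name in feats}

label = 1  # 1 = altered, 0 = real (illustrative labels)
losses = np.array([cross_entropy(stream_logits(feats[n], *params[n]), label)
                   for n in feats])

# "Balanced" weighting (our assumption, not the paper's exact loss):
# weight each stream inversely to its current loss so every stream keeps
# contributing during training instead of one stream dominating.
weights = (1.0 / losses) / (1.0 / losses).sum()
balanced_loss = float((weights * losses).sum())
```

In a real implementation the streams would be CNN branches over cropped facial regions, and the balancing term would be part of the training objective rather than computed per-sample like this.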
Related papers
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods either detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
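The "start from real images" idea above can be sketched as a toy one-class detector: fit a dense region around real-image features, then flag anything that falls outside it. The feature vectors, the distance metric, and the percentile threshold below are all illustrative stand-ins, not that paper's method:

```python
import numpy as np

# Toy one-class sketch: real-image features form a dense cluster; anything
# far from the cluster is flagged as generated, regardless of which
# generative model produced it. All data here is synthetic.
rng = np.random.default_rng(1)

real_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 16))  # dense subspace
fake_feats = rng.normal(loc=4.0, scale=1.0, size=(100, 16))  # far away

center = real_feats.mean(axis=0)
# Radius covering 99% of the real features.
radius = np.percentile(np.linalg.norm(real_feats - center, axis=1), 99)

def looks_generated(f):
    """Flag a feature vector that falls outside the real-image region."""
    return np.linalg.norm(f - center) > radius

fake_flagged = np.mean([looks_generated(f) for f in fake_feats])
real_flagged = np.mean([looks_generated(f) for f in real_feats])
```

A learned mapping (rather than a raw mean-and-radius fit) is what lets the real features form a genuinely dense subspace in practice.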
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Deepfake Detection of Occluded Images Using a Patch-based Approach [1.6114012813668928]
We present a deep learning approach that uses the entire face together with face patches to distinguish real from fake images in the presence of occlusion.
To produce fake images, StyleGAN and StyleGAN2 are trained on FFHQ images, while StarGAN and PGGAN are trained on CelebA images.
The proposed approach reaches higher accuracy in earlier epochs than other methods and improves on SoTA results by 0.4%-7.9% on the different constructed datasets.
arXiv Detail & Related papers (2023-04-10T12:12:14Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z) - Video Manipulations Beyond Faces: A Dataset with Human-Machine Analysis [60.13902294276283]
We present VideoSham, a dataset consisting of 826 videos (413 real and 413 manipulated).
Many of the existing deepfake datasets focus exclusively on two types of facial manipulations -- swapping with a different subject's face or altering the existing face.
Our analysis shows that state-of-the-art manipulation detection algorithms only work for a few specific attacks and do not scale well on VideoSham.
arXiv Detail & Related papers (2022-07-26T17:39:04Z) - Practical Deepfake Detection: Vulnerabilities in Global Contexts [1.6114012813668934]
Deep learning has enabled digital alterations to videos, known as deepfakes.
This technology raises important societal concerns regarding disinformation and authenticity.
We simulate data corruption techniques and examine the performance of a state-of-the-art deepfake detection algorithm on corrupted variants of the FaceForensics++ dataset.
arXiv Detail & Related papers (2022-06-20T15:24:55Z) - Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
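The identity-driven setting above, decide whether the identity in a suspect image matches a claimed target identity, can be sketched with embedding comparison. The embeddings, cosine similarity, and 0.7 threshold below are generic face-verification conventions used for illustration, not the OuterFace algorithm:

```python
import numpy as np

# Toy sketch of identity-driven detection: compare a face embedding from
# the suspect image/video with a reference embedding of the claimed
# identity. Embeddings here are synthetic stand-ins for a face encoder.
def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_identity(suspect_emb, reference_emb, threshold=0.7):
    """Decide whether the suspect face matches the target identity."""
    return cosine_similarity(suspect_emb, reference_emb) >= threshold

rng = np.random.default_rng(2)
reference = rng.normal(size=128)                   # claimed identity
genuine = reference + 0.1 * rng.normal(size=128)   # same person, small noise
impostor = rng.normal(size=128)                    # unrelated identity
```

The appeal of this formulation is that it needs no examples of any particular fake-generation method, only a reliable identity embedding.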
arXiv Detail & Related papers (2020-12-07T18:59:08Z) - ID-Reveal: Identity-aware DeepFake Video Detection [24.79483180234883]
ID-Reveal is a new approach that learns temporal facial features specific to how a person moves while talking.
It needs no training data of fakes, training only on real videos.
We obtain an average improvement of more than 15% in accuracy for facial reenactment on highly compressed videos.
arXiv Detail & Related papers (2020-12-04T10:43:16Z) - What makes fake images detectable? Understanding properties that
generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z) - Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.