How Do Deepfakes Move? Motion Magnification for Deepfake Source Detection
- URL: http://arxiv.org/abs/2212.14033v1
- Date: Wed, 28 Dec 2022 18:59:21 GMT
- Title: How Do Deepfakes Move? Motion Magnification for Deepfake Source Detection
- Authors: Umur Aybars Ciftci, Ilke Demir
- Abstract summary: We build a generalized deepfake source detector based on sub-muscular motion in faces.
Our approach exploits the difference between real motion and the amplified GAN fingerprints.
We evaluate our approach on two multi-source datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the proliferation of deep generative models, deepfakes are improving in
quality and quantity every day. However, there are subtle authenticity signals
in pristine videos, not replicated by SOTA GANs. We contrast the movement in
deepfakes and authentic videos by motion magnification towards building a
generalized deepfake source detector. Sub-muscular facial motion is interpreted
differently by each generative model, and that difference is reflected in the
model's generative residue. Our approach exploits the difference between real
motion and the amplified GAN fingerprints, by combining deep and traditional
motion magnification, to detect whether a video is fake and its source
generator if so. Evaluating our approach on two multi-source datasets, we
obtain 97.17% and 94.03% for video source detection. We compare against the
prior deepfake source detector and other complex architectures. We also analyze
the importance of magnification amount, phase extraction window, backbone
network architecture, sample counts, and sample lengths. Finally, we report our
results for different skin tones to assess bias.
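The classical half of the magnification pipeline described above can be sketched with a simple Eulerian (temporal band-pass) amplifier. This is a minimal illustrative example, not the authors' implementation: the passband, magnification factor, and function name are assumptions, and the paper additionally combines this with deep and phase-based magnification.

```python
# Minimal sketch of Eulerian motion magnification: per-pixel temporal
# band-pass filtering, with the band-limited signal added back amplified.
import numpy as np

def magnify_motion(frames, alpha=10.0, low=0.5, high=3.0, fps=30.0):
    """Amplify subtle temporal variations in a video.

    frames: array of shape (T, H, W), grayscale, float.
    alpha:  magnification factor (illustrative value).
    low, high: temporal passband in Hz targeting subtle facial motion.
    """
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    # Temporal FFT per pixel.
    spectrum = np.fft.fft(frames, axis=0)
    freqs = np.abs(np.fft.fftfreq(T, d=1.0 / fps))
    # Ideal band-pass mask selecting the motion band of interest
    # (DC is excluded, so static content is left untouched).
    mask = (freqs >= low) & (freqs <= high)
    band = spectrum * mask[:, None, None]
    # Recover the band-limited signal and add it back, amplified.
    filtered = np.real(np.fft.ifft(band, axis=0))
    return frames + alpha * filtered
```

In practice the amplified residual, rather than the magnified video itself, is what exposes generator-specific fingerprints to a downstream classifier.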
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of facial presence in a video.
We apply our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Undercover Deepfakes: Detecting Fake Segments in Videos [1.2609216345578933]
A new paradigm of deepfakes has emerged: mostly real videos altered only slightly to distort the truth.
In this paper, we present a deepfake detection method that can address this issue by performing deepfake prediction at the frame and video levels.
In particular, the paradigm we address will form a powerful tool for the moderation of deepfakes, where human oversight can be better targeted to the parts of videos suspected of being deepfakes.
arXiv Detail & Related papers (2023-05-11T04:43:10Z) - DeePhy: On Deepfake Phylogeny [58.01631614114075]
DeePhy is a novel Deepfake Phylogeny dataset which consists of 5040 deepfake videos generated using three different generation techniques.
We present the benchmark on DeePhy dataset using six deepfake detection algorithms.
arXiv Detail & Related papers (2022-09-19T15:30:33Z) - Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches contribute to exploring the specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - Model Attribution of Face-swap Deepfake Videos [39.771800841412414]
We first introduce a new dataset with DeepFakes from Different Models (DFDM) based on several Autoencoder models.
Specifically, five generation models with variations in encoder, decoder, intermediate layer, input resolution, and compression ratio have been used to generate a total of 6,450 Deepfake videos.
We take Deepfakes model attribution as a multiclass classification task and propose a spatial and temporal attention based method to explore the differences among Deepfakes.
arXiv Detail & Related papers (2022-02-25T20:05:18Z) - M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z) - Where Do Deep Fakes Look? Synthetic Face Detection via Gaze Tracking [8.473714899301601]
First, we propose several prominent eye and gaze features that deep fakes exhibit differently.
Second, we compile those features into signatures and analyze and compare those of real and fake videos.
Third, we generalize this formulation to deep fake detection problem by a deep neural network.
arXiv Detail & Related papers (2021-01-04T18:54:46Z) - How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals [9.918684475252636]
We propose an approach not only to separate deep fakes from real videos, but also to discover the specific generative model behind a deep fake.
Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
arXiv Detail & Related papers (2020-08-26T03:35:47Z) - Emotions Don't Lie: An Audio-Visual Deepfake Detection Method Using Affective Cues [75.1731999380562]
We present a learning-based method for detecting real and fake deepfake multimedia content.
We extract and analyze the similarity between the two audio and visual modalities from within the same video.
We compare our approach with several SOTA deepfake detection methods and report per-video AUC of 84.4% on the DFDC and 96.6% on the DF-TIMIT datasets.
arXiv Detail & Related papers (2020-03-14T22:07:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.