How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via
Interpreting Residuals with Biological Signals
- URL: http://arxiv.org/abs/2008.11363v1
- Date: Wed, 26 Aug 2020 03:35:47 GMT
- Title: How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via
Interpreting Residuals with Biological Signals
- Authors: Umur Aybars Ciftci and Ilke Demir and Lijun Yin
- Abstract summary: We propose an approach not only to separate deep fakes from real, but also to discover the specific generative model behind a deep fake.
Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
- Score: 9.918684475252636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fake portrait video generation techniques pose a new threat to
society through photorealistic deep fakes used for political propaganda, celebrity
imitation, forged evidence, and other identity-related manipulations.
Following these generation techniques, several detection approaches have
proved useful thanks to their high classification accuracy. Nevertheless, almost
no effort has been spent on tracking down the source of deep fakes. We propose an
approach not only to separate deep fakes from real videos, but also to discover
the specific generative model behind a deep fake. Some pure deep learning based
approaches try to classify deep fakes using CNNs where they actually learn the
residuals of the generator. We believe that these residuals contain more
information and we can reveal these manipulation artifacts by disentangling
them with biological signals. Our key observation is that the
spatiotemporal patterns in biological signals can be conceived as a
representative projection of residuals. To justify this observation, we extract
PPG cells from real and fake videos and feed these to a state-of-the-art
classification network for detecting the generative model per video. Our
results indicate that our approach can detect fake videos with 97.29% accuracy,
and the source model with 93.39% accuracy.
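The pipeline above can be illustrated with a minimal sketch: extract a crude remote-PPG trace (mean green-channel intensity over face regions) per frame, then stack the traces into a small spatiotemporal "PPG cell" that a classifier could consume. This is a simplified assumption of the paper's construction (the actual PPG cells also encode spectral information, and the function names here are illustrative):

```python
import numpy as np

def ppg_signal(frames, roi):
    """Mean green-channel intensity over a face ROI per frame --
    a crude chrominance-based proxy for remote PPG."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def ppg_cell(frames, rois, window=64):
    """Stack per-ROI signals into a 2D 'PPG cell': rows are facial
    regions, columns are time steps. (The paper additionally appends
    the signals' power spectra; only raw traces are kept here.)"""
    cell = np.stack([ppg_signal(frames, r)[:window] for r in rois])
    # normalize each row to zero mean / unit variance
    cell = (cell - cell.mean(axis=1, keepdims=True)) / \
           (cell.std(axis=1, keepdims=True) + 1e-8)
    return cell

# toy demo: 64 synthetic 32x32 RGB frames, two face regions
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (32, 32, 3)).astype(np.float64)
          for _ in range(64)]
rois = [(0, 16, 0, 16), (16, 32, 16, 32)]
cell = ppg_cell(frames, rois)
print(cell.shape)  # (2, 64)
```

Each resulting cell would then be passed as a small image to an off-the-shelf classification CNN trained per generative model.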
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose geometric-fakeness features (GFF), which characterize the dynamic degree of a face's presence in a video.
We apply our approach to videos in which multiple faces are simultaneously present.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- How Do Deepfakes Move? Motion Magnification for Deepfake Source Detection [4.567475511774088]
We build a generalized deepfake source detector based on sub-muscular motion in faces.
Our approach exploits the difference between real motion and the amplified GAN fingerprints.
We evaluate our approach on two multi-source datasets.
arXiv Detail & Related papers (2022-12-28T18:59:21Z)
- Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z)
- Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines [17.7858728343141]
Deepfakes pose a serious threat to digital well-being by fueling misinformation.
We introduce a framework for amplifying artifacts in deepfake videos to make them more detectable by people.
We propose a novel, semi-supervised Artifact Attention module, which is trained on human responses to create attention maps that highlight video artifacts.
arXiv Detail & Related papers (2022-06-01T14:43:49Z)
- A Survey of Deep Fake Detection for Trial Courts [2.320417845168326]
DeepFake algorithms can create fake images and videos that humans cannot distinguish from authentic ones.
It has become essential to detect fake videos to avoid the spread of false information.
This paper presents a survey of methods used to detect DeepFakes and datasets available for detecting DeepFakes.
arXiv Detail & Related papers (2022-05-31T13:50:25Z)
- Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z)
- Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
arXiv Detail & Related papers (2021-02-11T11:28:00Z)
- Where Do Deep Fakes Look? Synthetic Face Detection via Gaze Tracking [8.473714899301601]
We propose several prominent eye and gaze features that deep fakes exhibit differently.
Second, we compile those features into signatures and analyze and compare those of real and fake videos.
Third, we generalize this formulation to the deep fake detection problem using a deep neural network.
arXiv Detail & Related papers (2021-01-04T18:54:46Z)
- Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
arXiv Detail & Related papers (2020-12-07T18:59:08Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
- Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples [23.695497512694068]
Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before.
Manipulated videos can fuel disinformation and reduce trust in media.
Recently developed Deepfake detection methods rely on deep neural networks (DNNs) to distinguish AI-generated fake videos from real ones.
arXiv Detail & Related papers (2020-02-09T07:10:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.