Watch Those Words: Video Falsification Detection Using Word-Conditioned
Facial Motion
- URL: http://arxiv.org/abs/2112.10936v1
- Date: Tue, 21 Dec 2021 01:57:04 GMT
- Title: Watch Those Words: Video Falsification Detection Using Word-Conditioned
Facial Motion
- Authors: Shruti Agarwal, Liwen Hu, Evonne Ng, Trevor Darrell, Hao Li, Anna
Rohrbach
- Abstract summary: We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
- Score: 82.06128362686445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In today's era of digital misinformation, we are increasingly faced with new
threats posed by video falsification techniques. Such falsifications range from
cheapfakes (e.g., lookalikes or audio dubbing) to deepfakes (e.g.,
sophisticated AI media synthesis methods), which are becoming perceptually
indistinguishable from real videos. To tackle this challenge, we propose a
multi-modal semantic forensic approach to discover clues that go beyond
detecting discrepancies in visual quality, thereby handling both simpler
cheapfakes and visually persuasive deepfakes. In this work, our goal is to
verify that the purported person seen in the video is indeed themselves by
detecting anomalous correspondences between their facial movements and the
words they are saying. We leverage the idea of attribution to learn
person-specific biometric patterns that distinguish a given speaker from
others. We use interpretable Action Units (AUs) to capture a person's face and
head movement as opposed to deep CNN visual features, and we are the first to
use word-conditioned facial motion analysis. Unlike existing person-specific
approaches, our method is also effective against attacks that focus on lip
manipulation. We further demonstrate our method's effectiveness on a range of
fakes not seen in training, including those without video manipulation, which
were not addressed in prior work.
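As a minimal, illustrative sketch of the word-conditioned idea (not the authors' released code): assuming per-frame Action Unit intensities from an off-the-shelf extractor such as OpenFace and word-level timestamps from a forced aligner, one could pool AU statistics over each spoken word and fit a person-specific one-class model per word. The feature design, the per-word one-class SVMs, and all names below are illustrative assumptions.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def word_conditioned_features(au_frames, word_spans):
    """Aggregate AU statistics over the frames of each spoken word.

    au_frames:  (T, K) array of per-frame Action Unit intensities.
    word_spans: list of (word, start_frame, end_frame) tuples.
    Returns one (2K,) mean/std feature vector per word occurrence.
    """
    feats, words = [], []
    for word, s, e in word_spans:
        if e <= s:
            continue
        seg = au_frames[s:e]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
        words.append(word)
    return np.array(feats), np.array(words)

def fit_person_model(feats, words, min_occurrences=5):
    """Fit one one-class SVM per word on authentic footage of the target speaker."""
    models = {}
    for w in np.unique(words):
        X = feats[words == w]
        if len(X) < min_occurrences:
            continue
        scaler = StandardScaler().fit(X)
        svm = OneClassSVM(nu=0.1, gamma="scale").fit(scaler.transform(X))
        models[w] = (scaler, svm)
    return models

def score_video(models, feats, words):
    """Mean decision score over known words; low scores suggest falsification."""
    scores = []
    for f, w in zip(feats, words):
        if w in models:
            scaler, svm = models[w]
            scores.append(svm.decision_function(scaler.transform(f[None]))[0])
    return float(np.mean(scores)) if scores else float("nan")

if __name__ == "__main__":
    # Tiny synthetic demo so the sketch runs end to end.
    rng = np.random.default_rng(0)
    au = rng.normal(size=(2000, 17))                       # 2000 frames, 17 AU channels
    spans = [("country", i, i + 12) for i in range(0, 960, 40)]
    spans += [("people", i, i + 12) for i in range(1000, 1960, 40)]
    F, W = word_conditioned_features(au, spans)
    models = fit_person_model(F, W)
    print("authenticity score:", score_video(models, F, W))

Because each model is fit only on authentic footage of the target speaker, a lip-sync manipulation that preserves visual quality can still score poorly when its AU patterns for common words diverge from the speaker's habitual motion.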
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research, we propose geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We apply our approach to videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z) - Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity (a minimal sketch of this idea appears after this list).
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z) - Leveraging Real Talking Faces via Self-Supervision for Robust Forgery
Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z) - Detecting Deepfake Videos Using Euler Video Magnification [1.8506048493564673]
Deepfake videos are videos manipulated using advanced machine learning techniques.
In this paper, we examine Euler video magnification as a technique for possible identification of deepfake videos.
Our approach uses features extracted via the Euler technique to train three models to classify counterfeit and unaltered videos.
arXiv Detail & Related papers (2021-01-27T17:37:23Z) - Lips Don't Lie: A Generalisable and Robust Approach to Face Forgery
Detection [118.37239586697139]
LipForensics is a detection approach capable of both generalising to unseen manipulations and withstanding various distortions.
It consists in first pretraining a spatio-temporal network to perform visual speech recognition (lipreading).
A temporal network is subsequently finetuned on fixed mouth embeddings of real and forged data in order to detect fake videos based on mouth movements without over-fitting to low-level, manipulation-specific artefacts.
arXiv Detail & Related papers (2020-12-14T15:53:56Z) - How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via
Interpreting Residuals with Biological Signals [9.918684475252636]
We propose an approach not only to separate deep fakes from real videos, but also to discover the specific generative model behind a deep fake.
Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
arXiv Detail & Related papers (2020-08-26T03:35:47Z) - Detecting Deep-Fake Videos from Appearance and Behavior [0.0]
We describe a biometric-based forensic technique for detecting face-swap deep fakes.
We show the efficacy of this approach across several large-scale video datasets.
arXiv Detail & Related papers (2020-04-29T21:38:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.