FakeOut: Leveraging Out-of-domain Self-supervision for Multi-modal Video Deepfake Detection
- URL: http://arxiv.org/abs/2212.00773v2
- Date: Wed, 7 Feb 2024 22:55:29 GMT
- Title: FakeOut: Leveraging Out-of-domain Self-supervision for Multi-modal Video Deepfake Detection
- Authors: Gil Knafo and Ohad Fried
- Abstract summary: Synthetic videos of speaking humans can be used to spread misinformation in a convincing manner.
FakeOut is a novel approach that relies on multi-modal data throughout both the pre-training phase and the adaptation phase.
Our method achieves state-of-the-art results in cross-dataset generalization on audio-visual datasets.
- Score: 10.36919027402249
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video synthesis methods have rapidly improved in recent years, allowing easy
creation of synthetic humans. This poses a problem, especially in the era of
social media, as synthetic videos of speaking humans can be used to spread
misinformation in a convincing manner. Thus, there is a pressing need for
accurate and robust deepfake detection methods that can detect forgery
techniques not seen during training. In this work, we explore whether this can
be done by leveraging a multi-modal, out-of-domain backbone trained in a
self-supervised manner and adapted to the video deepfake domain. We propose
FakeOut, a novel approach that relies on multi-modal data throughout both the
pre-training phase and the adaptation phase. We demonstrate the efficacy and
robustness of FakeOut in detecting various types of deepfakes, especially
manipulations that were not seen during training. Our method achieves
state-of-the-art results in cross-dataset generalization on audio-visual
datasets. This study shows that, perhaps surprisingly, training on
out-of-domain videos (i.e., videos that do not specifically feature speaking
humans) can lead to better deepfake detection systems. Code is available on GitHub.
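The core idea described above is transfer from out-of-domain, self-supervised pre-training to deepfake classification. Below is a minimal, hypothetical PyTorch sketch of such an adaptation phase; the backbone modules, embedding dimensions, and head design are illustrative assumptions, not the authors' actual architecture or code.

```python
# Hypothetical sketch of a FakeOut-style adaptation phase: a multi-modal
# backbone pre-trained out-of-domain with self-supervision is adapted to
# binary deepfake classification. All names here are illustrative.
import torch
import torch.nn as nn

class MultiModalDeepfakeClassifier(nn.Module):
    def __init__(self, video_backbone: nn.Module, audio_backbone: nn.Module,
                 video_dim: int, audio_dim: int, freeze_backbones: bool = False):
        super().__init__()
        self.video_backbone = video_backbone  # pre-trained, out-of-domain
        self.audio_backbone = audio_backbone  # pre-trained, out-of-domain
        if freeze_backbones:
            for p in list(video_backbone.parameters()) + list(audio_backbone.parameters()):
                p.requires_grad = False
        # Lightweight head fine-tuned on deepfake data (real vs. fake).
        self.head = nn.Sequential(
            nn.Linear(video_dim + audio_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, frames: torch.Tensor, waveform: torch.Tensor) -> torch.Tensor:
        v = self.video_backbone(frames)    # (B, video_dim) clip embedding
        a = self.audio_backbone(waveform)  # (B, audio_dim) audio embedding
        return self.head(torch.cat([v, a], dim=-1))  # logit: fake vs. real
```

During adaptation, the head (and optionally the backbones) would be fine-tuned with a binary cross-entropy loss on real/fake labels.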
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- AVTENet: Audio-Visual Transformer-based Ensemble Network Exploiting Multiple Experts for Video Deepfake Detection [53.448283629898214]
The recent proliferation of hyper-realistic deepfake videos has drawn attention to the threat of audio and visual forgeries.
Most previous work on detecting AI-generated fake videos utilizes only the visual modality or only the audio modality.
We propose an Audio-Visual Transformer-based Ensemble Network (AVTENet) framework that considers both acoustic manipulation and visual manipulation.
arXiv Detail & Related papers (2023-10-19T19:01:26Z)
- Undercover Deepfakes: Detecting Fake Segments in Videos [1.2609216345578933]
A new paradigm of deepfake generation produces mostly real videos that are altered only slightly to distort the truth.
In this paper, we present a deepfake detection method that addresses this issue by performing deepfake prediction at both the frame and video levels.
In particular, this paradigm forms a powerful tool for the moderation of deepfakes, where human oversight can be better targeted at the parts of videos suspected of being deepfakes (see the sketch below).
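A minimal sketch of the frame-plus-video-level prediction idea, assuming a per-frame classifier that outputs one logit per frame; the top-k aggregation strategy shown is an illustrative choice, not necessarily the paper's.

```python
# Illustrative sketch (not the authors' code): per-frame deepfake scores are
# aggregated into a video-level decision, so suspicious segments can be
# localized for human review.
import torch

def video_score_from_frames(frame_logits: torch.Tensor, top_k: int = 8) -> torch.Tensor:
    """frame_logits: (num_frames,) raw logits from a per-frame classifier.
    Averaging the top-k most suspicious frames keeps short fake segments
    from being diluted by the surrounding real frames."""
    probs = torch.sigmoid(frame_logits)
    k = min(top_k, probs.numel())
    top_probs, _ = probs.topk(k)
    return top_probs.mean()

def flag_fake_segments(frame_logits: torch.Tensor, threshold: float = 0.5):
    """Indices of frames whose fake probability exceeds the threshold."""
    return (torch.sigmoid(frame_logits) > threshold).nonzero(as_tuple=True)[0]
```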
arXiv Detail & Related papers (2023-05-11T04:43:10Z)
- Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z)
- Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques encountered in real-world scenarios require stronger generalization abilities from face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised, transformer-based audio-visual contrastive learning method (a toy version of such an objective is sketched below).
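A toy InfoNCE-style audio-visual contrastive objective, assuming paired (B, D) clip embeddings; the cited paper's actual transformer architecture and loss details may differ.

```python
# Illustrative audio-visual contrastive loss sketch: matching audio/video
# embeddings from the same clip are pulled together, and mismatched pairs
# within the batch are pushed apart.
import torch
import torch.nn.functional as F

def audio_visual_infonce(video_emb: torch.Tensor, audio_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """video_emb, audio_emb: (B, D) embeddings of the same B clips."""
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    logits = v @ a.t() / temperature                     # (B, B) similarities
    targets = torch.arange(v.size(0), device=v.device)   # diagonal = positives
    # Symmetric loss: video-to-audio and audio-to-video retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```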
arXiv Detail & Related papers (2022-03-02T17:44:40Z)
- Evaluation of an Audio-Video Multimodal Deepfake Dataset using Unimodal and Multimodal Detectors [18.862258543488355]
Deepfakes can cause security and privacy issues.
Cloning human voices using deep-learning technologies is also an emerging domain.
A good deepfake detector must therefore be able to detect deepfakes across multiple modalities.
arXiv Detail & Related papers (2021-09-07T11:00:20Z)
- A Convolutional LSTM based Residual Network for Deepfake Video Detection [23.275080108063406]
We develop a Convolutional LSTM based Residual Network (CLRNet) to detect deepfake videos (a toy ConvLSTM cell is sketched below).
We also propose a transfer learning-based approach to generalize across different deepfake methods.
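A minimal, hypothetical ConvLSTM cell in the spirit of CLRNet; the gate layout is the standard ConvLSTM formulation, while CLRNet's actual residual wiring and hyperparameters are not reproduced here.

```python
# Illustrative ConvLSTM cell (not the authors' implementation): convolutional
# gates preserve spatial structure while modeling temporal dependencies
# across video frames.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # One conv produces all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state  # hidden and cell states, each (B, hid_ch, H, W)
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```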
arXiv Detail & Related papers (2020-09-16T05:57:06Z)
- Emotions Don't Lie: An Audio-Visual Deepfake Detection Method Using Affective Cues [75.1731999380562]
We present a learning-based method for classifying multimedia content as real or deepfake.
We extract and analyze the similarity between the audio and visual modalities from within the same video (see the sketch below).
We compare our approach with several SOTA deepfake detection methods and report a per-video AUC of 84.4% on the DFDC and 96.6% on the DF-TIMIT datasets.
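A toy sketch of the modality-agreement idea, assuming precomputed audio and visual (e.g., affect) embeddings; the similarity measure and threshold are illustrative assumptions, not the paper's.

```python
# Illustrative sketch (not the authors' code): if the audio and visual
# embeddings extracted from the same video disagree strongly, the video
# is flagged as a likely fake.
import torch
import torch.nn.functional as F

def modality_agreement_score(audio_emb: torch.Tensor,
                             visual_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity in [-1, 1]; lower agreement suggests manipulation."""
    return F.cosine_similarity(audio_emb, visual_emb, dim=-1)

def is_probably_fake(audio_emb: torch.Tensor, visual_emb: torch.Tensor,
                     threshold: float = 0.3) -> torch.Tensor:
    return modality_agreement_score(audio_emb, visual_emb) < threshold
```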
arXiv Detail & Related papers (2020-03-14T22:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.