Deepfake detection: humans vs. machines
- URL: http://arxiv.org/abs/2009.03155v1
- Date: Mon, 7 Sep 2020 15:20:37 GMT
- Title: Deepfake detection: humans vs. machines
- Authors: Pavel Korshunov and Sébastien Marcel
- Abstract summary: We present a subjective study conducted in a crowdsourcing-like scenario, which systematically evaluates how hard it is for humans to tell whether a video is a deepfake or not.
For each video, a simple question, "Is the face of the person in the video real or fake?", was answered on average by 19 naïve subjects.
The evaluation demonstrates that while human perception is very different from machine perception, both are successfully fooled by deepfakes, albeit in different ways.
- Score: 4.485016243130348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfake videos, where a person's face is automatically swapped with a face
of someone else, are becoming easier to generate with more realistic results.
In response to the threat such manipulations can pose to our trust in video
evidence, several large datasets of deepfake videos and many methods to detect
them were proposed recently. However, it is still unclear how realistic
deepfake videos are for an average person and whether the algorithms are
significantly better than humans at detecting them. In this paper, we present a
subjective study conducted in a crowdsourcing-like scenario, which
systematically evaluates how hard it is for humans to see if the video is
deepfake or not. For the evaluation, we used 120 different videos (60 deepfakes
and 60 originals) manually pre-selected from the Facebook deepfake database,
which was provided for Kaggle's Deepfake Detection Challenge 2020. For each
video, a simple question, "Is the face of the person in the video real or
fake?", was answered on average by 19 naïve subjects. The results of the subjective
evaluation were compared with the performance of two state-of-the-art deepfake
detection methods, based on the Xception and EfficientNet (B4 variant)
neural networks, which were pre-trained on two other large public databases:
the Google subset of FaceForensics++ and the recent Celeb-DF dataset. The
evaluation demonstrates that while human perception is very different from
machine perception, both are successfully fooled by deepfakes, albeit in
different ways. Specifically, the algorithms struggle to detect the deepfake
videos that human subjects found very easy to spot.
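The comparison above can be illustrated with a minimal sketch. The data below is entirely hypothetical (the study used 120 videos and ~19 subjects per video; the numbers here are placeholders): human judgments are aggregated as the fraction of subjects voting "fake" per video, the detector produces a score in [0, 1], and both are thresholded at 0.5 to compute accuracy on the same labeled set.

```python
import numpy as np

# Ground-truth labels for a handful of videos: 1 = deepfake, 0 = original.
labels = np.array([1, 1, 0, 0, 1, 0])

# Hypothetical fraction of naive subjects who rated each video "fake"
# (in the study, each video was rated by roughly 19 subjects on average).
human_fake_votes = np.array([0.9, 0.3, 0.1, 0.2, 0.8, 0.6])

# Hypothetical detector scores in [0, 1]; higher means "more likely fake".
model_scores = np.array([0.7, 0.95, 0.05, 0.4, 0.2, 0.1])

def accuracy(scores, labels, threshold=0.5):
    """Classify as fake when the score exceeds the threshold,
    then report the fraction of videos matching the true labels."""
    predictions = (scores > threshold).astype(int)
    return float((predictions == labels).mean())

human_acc = accuracy(human_fake_votes, labels)
model_acc = accuracy(model_scores, labels)

# The disagreement pattern the paper highlights: deepfakes a strong
# human majority spotted (votes > 0.8) that the detector nevertheless missed.
easy_for_humans = (human_fake_votes > 0.8) & (labels == 1)
model_predictions = (model_scores > 0.5).astype(int)
missed_by_model = easy_for_humans & (model_predictions != labels)
```

With real data, one would typically also report a threshold-free metric such as AUC for the detector, since human votes and model scores are not calibrated to the same scale.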
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z) - DeePhy: On Deepfake Phylogeny [58.01631614114075]
DeePhy is a novel Deepfake Phylogeny dataset which consists of 5040 deepfake videos generated using three different generation techniques.
We present the benchmark on DeePhy dataset using six deepfake detection algorithms.
arXiv Detail & Related papers (2022-09-19T15:30:33Z) - Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z) - Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z) - Detecting Deepfake Videos Using Euler Video Magnification [1.8506048493564673]
Deepfake videos are videos manipulated using advanced machine learning techniques.
In this paper, we examine a technique for possible identification of deepfake videos.
Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos.
arXiv Detail & Related papers (2021-01-27T17:37:23Z) - WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z) - A Convolutional LSTM based Residual Network for Deepfake Video Detection [23.275080108063406]
We develop a Convolutional LSTM based Residual Network (CLRNet) to detect deepfake videos.
We also propose a transfer learning-based approach to generalize different deepfake methods.
arXiv Detail & Related papers (2020-09-16T05:57:06Z) - Deepfake Video Forensics based on Transfer Learning [0.0]
"Deepfake" techniques can create fake images and videos that humans cannot differentiate from genuine ones.
This paper details retraining image classification models to capture the features of each deepfake video frame.
When checking deepfake videos, this technique achieved more than 87 per cent accuracy.
arXiv Detail & Related papers (2020-04-29T13:21:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.