Can deepfakes be created by novice users?
- URL: http://arxiv.org/abs/2304.14576v1
- Date: Fri, 28 Apr 2023 00:32:24 GMT
- Title: Can deepfakes be created by novice users?
- Authors: Pulak Mehta, Gauri Jagatap, Kevin Gallagher, Brian Timmerman, Progga
Deb, Siddharth Garg, Rachel Greenstadt, Brendan Dolan-Gavitt
- Abstract summary: We conduct user studies to understand whether participants with advanced computer skills can create Deepfakes.
We find that 23.1% of the participants successfully created complete Deepfakes with audio and video.
We use Deepfake detection software tools as well as human examiner-based analysis to classify the successfully generated Deepfake outputs as fake, suspicious, or real.
- Score: 15.014868583616504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in machine learning and computer vision have led to the
proliferation of Deepfakes. As technology democratizes over time, there is an
increasing fear that novice users can create Deepfakes to discredit others and
undermine public discourse. In this paper, we conduct user studies to
understand whether participants with advanced computer skills and varying
levels of computer science expertise can create Deepfakes of a person saying a
target statement using limited media files. We conduct two studies; in the
first study (n = 39) participants try creating a target Deepfake in a
constrained time frame using any tool they desire. In the second study (n = 29)
participants use pre-specified deep learning-based tools to create the same
Deepfake. We find that for the first study, 23.1% of the participants
successfully created complete Deepfakes with audio and video, whereas, for the
second user study, 58.6% of the participants were successful in stitching
target speech to the target video. We further use Deepfake detection software
tools as well as human examiner-based analysis to classify the successfully
generated Deepfake outputs as fake, suspicious, or real. The software detector
classified 80% of the Deepfakes as fake, whereas the human examiners classified
100% of the videos as fake. We conclude that creating Deepfakes is a simple
enough task for a novice user given adequate tools and time; however, the
resulting Deepfakes are not sufficiently real-looking and are unable to
completely fool detection software or human examiners.
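
As a rough sanity check on the reported figures, the sketch below converts the success and detection percentages from the abstract into approximate participant counts. This is a minimal illustrative calculation assuming straightforward rounding; it is not the authors' analysis code, and the helper function is hypothetical.

```python
# Back-of-the-envelope tally of the numbers reported in the abstract.
# Percentages and sample sizes come from the paper; rounding to whole
# participants is an assumption made here purely for illustration.

def successes(n_participants: int, success_rate: float) -> int:
    """Approximate number of successful participants implied by a reported rate."""
    return round(n_participants * success_rate)

study_1 = successes(39, 0.231)  # ~9 participants built a complete audio+video Deepfake
study_2 = successes(29, 0.586)  # ~17 participants stitched target speech to the target video

print(f"Study 1: {study_1}/39 complete Deepfakes")
print(f"Study 2: {study_2}/29 successful audio-video stitches")

# Detection outcomes on the successfully generated Deepfakes:
# the software detector flagged 80% as fake, human examiners flagged 100%.
detector_fake_rate = 0.80
human_fake_rate = 1.00
print(f"Detector: {detector_fake_rate:.0%} classified as fake")
print(f"Humans:   {human_fake_rate:.0%} classified as fake")
```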
Related papers
- Understanding Audiovisual Deepfake Detection: Techniques, Challenges, Human Factors and Perceptual Insights [49.81915942821647]
Deep Learning has been successfully applied in diverse fields, and its impact on deepfake detection is no exception.
Deepfakes are fake yet realistic synthetic content that can be used deceitfully for political impersonation, phishing, slandering, or spreading misinformation.
This paper aims to improve the effectiveness of deepfake detection strategies and guide future research in cybersecurity and media integrity.
arXiv Detail & Related papers (2024-11-12T09:02:11Z)
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are simultaneously present.
arXiv Detail & Related papers (2024-06-19T12:35:02Z)
- DF40: Toward Next-Generation Deepfake Detection [62.073997142001424]
Existing works identify top-notch detection algorithms and models by adhering to the common practice: training detectors on one specific dataset and testing them on other prevalent deepfake datasets.
But can these stand-out "winners" be truly applied to tackle the myriad of realistic and diverse deepfakes lurking in the real world?
We construct a highly diverse deepfake detection dataset called DF40, which comprises 40 distinct deepfake techniques.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2022-09-19T15:30:33Z)
- DeePhy: On Deepfake Phylogeny [58.01631614114075]
DeePhy is a novel Deepfake Phylogeny dataset which consists of 5040 deepfake videos generated using three different generation techniques.
We present the benchmark on DeePhy dataset using six deepfake detection algorithms.
arXiv Detail & Related papers (2022-07-27T17:05:16Z)
- Using Deep Learning to Detecting Deepfakes [0.0]
Deepfakes are videos or images that replace one person's face with another computer-generated face, often that of a more recognizable person in society.
To combat this online threat, researchers have developed models that are designed to detect deepfakes.
This study looks at various deepfake detection models that use deep learning algorithms to combat this looming threat.
arXiv Detail & Related papers (2022-06-01T14:43:49Z)
- Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines [17.7858728343141]
Deepfakes pose a serious threat to digital well-being by fueling misinformation.
We introduce a framework for amplifying artifacts in deepfake videos to make them more detectable by people.
We propose a novel, semi-supervised Artifact Attention module, which is trained on human responses to create attention maps that highlight video artifacts.
arXiv Detail & Related papers (2021-07-20T09:19:42Z)
- Human Perception of Audio Deepfakes [6.40753664615445]
We present results from comparing the abilities of humans and machines for detecting audio deepfakes.
In our experiment, 472 unique users competed against a state-of-the-art AI deepfake detection algorithm for 14912 total rounds of the game.
We find that humans and deepfake detection algorithms share similar strengths and weaknesses, both struggling to detect certain types of attacks.
arXiv Detail & Related papers (2021-01-05T11:10:32Z)
- WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2020-09-07T15:20:37Z)
- Deepfake detection: humans vs. machines [4.485016243130348]
We present a subjective study conducted in a crowdsourcing-like scenario, which systematically evaluates how hard it is for humans to tell whether a video is a deepfake or not.
For each video, a simple question, "Is the face of the person in the video real or fake?", was answered on average by 19 naïve subjects.
The evaluation demonstrates that while human perception is very different from the perception of a machine, both are successfully fooled by deepfakes, though in different ways.
arXiv Detail & Related papers (2020-09-07T15:20:37Z)