Audio Deepfake Perceptions in College Going Populations
- URL: http://arxiv.org/abs/2112.03351v1
- Date: Mon, 6 Dec 2021 20:53:41 GMT
- Title: Audio Deepfake Perceptions in College Going Populations
- Authors: Gabrielle Watson, Zahra Khanjani, Vandana P. Janeja
- Abstract summary: This study assesses audio deepfake perceptions among college students from different majors.
We also analyzed the results by grade level, complexity of the grammar used in the audio clips, length of the audio clips, and whether participants knew the term deepfake.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A deepfake is content that is generated or manipulated using AI methods and passed off as real. There are four deepfake types: audio, video, image, and text. In this research we focus on audio deepfakes and how people perceive them. Of the several audio deepfake generation frameworks available, we chose MelGAN, a fast, non-autoregressive framework that requires fewer parameters. This study assesses audio deepfake perceptions among college students from different majors, and examines how their background and major affect their perception of AI-generated deepfakes. We also analyzed the results by grade level, complexity of the grammar used in the audio clips, length of the audio clips, whether participants knew the term deepfake, and political angle. Interestingly, the results show that when an audio clip carries a political connotation, this can affect whether people judge it real or fake, even when the content is otherwise fairly similar.
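For readers unfamiliar with how a non-autoregressive vocoder like MelGAN is driven, the sketch below shows roughly what generation looks like in practice. It assumes the publicly available PyTorch port at github.com/seungwonpark/melgan and its torch.hub entry point; the hub path and the inference method name are assumptions about that port, not details from this paper.

```python
# Hedged sketch of MelGAN-style waveform generation, assuming the PyTorch
# port at github.com/seungwonpark/melgan exposes a torch.hub entry point
# named "melgan"; entry names may differ between releases.
import torch

# Load a pretrained MelGAN vocoder (a non-autoregressive generator).
vocoder = torch.hub.load("seungwonpark/melgan", "melgan")
vocoder.eval()

# A mel-spectrogram stands in for the acoustic features of the target
# speech; random values here purely to show the tensor shapes involved.
mel = torch.randn(1, 80, 256)  # (batch, mel bins, frames)

with torch.no_grad():
    audio = vocoder.inference(mel)  # a single forward pass yields a waveform

print(audio.shape)  # roughly (frames * hop length,) samples
```

Because generation is a single forward pass rather than sample-by-sample autoregression, this is what makes MelGAN fast enough to produce the study's stimuli cheaply.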
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose geometric-fakeness features (GFF), which characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
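To make the geometric intuition concrete, here is a naive NumPy sketch of a frame-to-frame geometric-consistency score. It is not the authors' GFF; the landmark layout and the scoring rule are invented purely for illustration.

```python
# Naive illustration of a "geometric consistency" score over video frames.
# This is NOT the authors' GFF; the landmark layout and scoring rule are
# invented here purely to make the idea concrete.
import numpy as np

def geometric_consistency(landmarks: np.ndarray) -> float:
    """landmarks: (frames, points, 2) array of face landmark coordinates.

    Real faces move smoothly, so ratios of inter-landmark distances should
    vary little between frames; synthesized faces often jitter. Returns the
    variance of frame-to-frame changes in normalized pairwise distances
    (higher = less geometrically stable = more suspicious)."""
    f, p, _ = landmarks.shape
    # Pairwise distances per frame: (frames, points, points)
    diffs = landmarks[:, :, None, :] - landmarks[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(p, k=1)
    flat = dists[:, iu[0], iu[1]]                              # (frames, pairs)
    ratios = flat / (flat.mean(axis=1, keepdims=True) + 1e-8)  # scale-free
    return float(np.var(np.diff(ratios, axis=0)))

# Toy usage: 30 frames, 5 landmarks, with small random jitter.
rng = np.random.default_rng(0)
base = rng.uniform(0, 1, size=(5, 2))
frames = base[None] + 0.01 * rng.standard_normal((30, 5, 2))
print(geometric_consistency(frames))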
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- Deepfake CAPTCHA: A Method for Preventing Fake Calls [5.810459869589559]
We propose D-CAPTCHA: an active defense against real-time deepfakes.
The approach is to force the adversary into the spotlight by challenging the deepfake model to generate content which exceeds its capabilities.
In contrast to existing CAPTCHAs, we challenge the AI's ability to create content as opposed to its ability to classify content.
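A minimal sketch of that challenge-response idea follows: issue an unpredictable utterance task and gate the caller on both content and response time, on the assumption that a real-time deepfake pipeline struggles to comply quickly. The phrase list, the time budget, and the ASR stub are illustrative assumptions, not the paper's actual challenge tasks.

```python
# Minimal sketch of an active challenge-response gate in the spirit of
# D-CAPTCHA. The phrase list, time budget, and ASR stub are illustrative
# assumptions, not the paper's actual challenge tasks.
import random
import time

PHRASES = ["purple elephant tango", "seven quiet rivers", "glass kite morning"]
TIME_BUDGET_S = 4.0  # real-time deepfakes struggle to re-synthesize quickly

def issue_challenge() -> str:
    """Pick an unpredictable utterance the caller must speak immediately."""
    return random.choice(PHRASES)

def transcribe(audio: bytes) -> str:
    """Stub: in practice an ASR system would transcribe the caller's reply."""
    raise NotImplementedError

def verify_caller(get_reply) -> bool:
    """get_reply(challenge) -> audio bytes recorded from the live call."""
    challenge = issue_challenge()
    start = time.monotonic()
    audio = get_reply(challenge)
    elapsed = time.monotonic() - start
    if elapsed > TIME_BUDGET_S:
        return False  # too slow: consistent with on-the-fly synthesis
    return transcribe(audio).strip().lower() == challenge
```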
arXiv Detail & Related papers (2023-01-08T15:34:19Z)
- SceneFake: An Initial Dataset and Benchmarks for Scene Fake Audio Detection [54.74467470358476]
This paper proposes a dataset for scene fake audio detection named SceneFake.
A manipulated audio clip is generated by tampering only with the acoustic scene of an original audio clip.
Some scene fake audio detection benchmark results on the SceneFake dataset are reported in this paper.
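As a hedged illustration of the construction the summary describes (keep the utterance, swap only the acoustic scene), the following NumPy/soundfile sketch mixes a clean utterance with a new background at a fixed SNR. The file names and the 10 dB SNR are illustrative assumptions, not details from the dataset paper.

```python
# Sketch of SceneFake-style manipulation as the summary describes it: the
# utterance is kept, only the acoustic scene (background) is swapped.
# File names and the SNR choice are illustrative assumptions; mono audio
# is assumed throughout.
import numpy as np
import soundfile as sf  # pip install soundfile

speech, sr = sf.read("clean_utterance.wav")      # original foreground speech
scene, sr2 = sf.read("new_scene_ambience.wav")   # e.g. street noise
assert sr == sr2, "resample first if sample rates differ"

scene = np.resize(scene, speech.shape)           # loop/trim scene to length

# Mix at a chosen signal-to-noise ratio (10 dB here, arbitrarily).
snr_db = 10.0
p_speech = np.mean(speech**2)
p_scene = np.mean(scene**2) + 1e-12
gain = np.sqrt(p_speech / (p_scene * 10 ** (snr_db / 10)))
fake = speech + gain * scene

sf.write("scene_fake.wav", fake / np.max(np.abs(fake)), sr)
```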
arXiv Detail & Related papers (2022-11-11T09:05:50Z)
- Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
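The following PyTorch sketch illustrates the contrastive idea in its simplest form: an InfoNCE-style loss that pulls two segment embeddings of the same identity together while treating other identities in the batch as negatives. The batch layout and temperature are illustrative; this is not the authors' exact objective.

```python
# Hedged sketch of contrastive learning over per-identity segment
# embeddings, in the spirit of the person-of-interest detector. The batch
# layout and temperature are illustrative, not the paper's exact objective.
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """anchors, positives: (batch, dim) embeddings of two segments from the
    same identity at matching rows; other rows serve as negatives."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature     # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))    # diagonal = same identity
    return F.cross_entropy(logits, targets)

# Toy usage: 8 identities, 128-dim audio/video segment embeddings.
a = torch.randn(8, 128, requires_grad=True)
p = torch.randn(8, 128)
loss = info_nce(a, p)
loss.backward()
print(float(loss))
```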
arXiv Detail & Related papers (2022-04-06T20:51:40Z)
- Human Detection of Political Speech Deepfakes across Transcripts, Audio, and Video [4.78385214366452]
Recent advances in technology for hyper-realistic visual and audio effects provoke the concern that deepfake videos of political speeches will soon be indistinguishable from authentic video recordings.
We conduct 5 pre-registered randomized experiments with 2,215 participants to evaluate how accurately humans distinguish real political speeches from fabrications.
We find that base rates of misinformation minimally influence discernment, and that deepfakes with audio produced by state-of-the-art text-to-speech algorithms are harder to discern than the same deepfakes with voice-actor audio.
arXiv Detail & Related papers (2022-02-25T18:47:32Z)
- ADD 2022: the First Audio Deep Synthesis Detection Challenge [92.41777858637556]
The first Audio Deep Synthesis Detection challenge (ADD) was organized to fill this gap.
ADD 2022 includes three tracks: low-quality fake audio detection (LF), partially fake audio detection (PF), and audio fake game (FG).
arXiv Detail & Related papers (2022-02-17T03:29:20Z)
- How Deep Are the Fakes? Focusing on Audio Deepfake: A Survey [0.0]
This paper critically analyzes and provides a unique source of audio deepfake research, mostly ranging from 2016 to 2020.
This survey provides readers with a summary of (1) different deepfake categories, (2) how they can be created and detected, and (3) the most recent trends in this domain and shortcomings in detection methods.
We found that Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), and Deep Neural Networks (DNNs) are common ways of creating and detecting deepfakes.
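As a hedged illustration of the CNN-based detection approach the survey calls common, here is a tiny binary classifier over mel-spectrogram "images". The architecture is invented for illustration and is not taken from any surveyed work.

```python
# Hedged sketch of the kind of CNN detector the survey calls common: a tiny
# binary classifier over (1, mel bins, frames) spectrogram "images". The
# architecture is invented for illustration, not taken from a surveyed work.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),     # makes the net robust to clip length
    nn.Flatten(),
    nn.Linear(32, 2),            # logits: [real, fake]
)

spec = torch.randn(4, 1, 80, 300)  # a batch of mel-spectrograms
print(detector(spec).shape)        # torch.Size([4, 2])
```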
arXiv Detail & Related papers (2021-11-28T18:28:30Z)
- FakeAVCeleb: A Novel Audio-Video Multimodal Deepfake Dataset [21.199288324085444]
Recently, the problem of generating a cloned or synthesized human voice has been emerging.
With the growing threat of impersonation attacks using deepfake videos and audio, new deepfake detectors are needed that focus on both video and audio.
We propose a novel audio-video deepfake dataset (FakeAVCeleb) that contains not only deepfake videos but also the corresponding synthesized, cloned audio.
arXiv Detail & Related papers (2021-08-11T07:49:36Z)
- WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information (including any of its content) and is not responsible for any consequences of its use.