Behind the Deepfake: 8% Create; 90% Concerned. Surveying public exposure to and perceptions of deepfakes in the UK
- URL: http://arxiv.org/abs/2407.05529v1
- Date: Mon, 8 Jul 2024 00:22:51 GMT
- Title: Behind the Deepfake: 8% Create; 90% Concerned. Surveying public exposure to and perceptions of deepfakes in the UK
- Authors: Tvesha Sippy, Florence Enock, Jonathan Bright, Helen Z. Margetts
- Abstract summary: This article examines public exposure to and perceptions of deepfakes based on insights from a nationally representative survey of 1403 UK adults.
On average, 15% report exposure to harmful deepfakes, including deepfake pornography, deepfake frauds/scams and other potentially harmful deepfakes.
While exposure to harmful deepfakes was relatively low, awareness of and fears about deepfakes were high.
Most respondents were concerned that deepfakes could add to online child sexual abuse material, increase distrust in information and manipulate public opinion.
- Score: 1.0228192660021962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article examines public exposure to and perceptions of deepfakes based on insights from a nationally representative survey of 1403 UK adults. The survey is one of the first of its kind since recent improvements in deepfake technology and widespread adoption of political deepfakes. The findings reveal three key insights. First, on average, 15% of people report exposure to harmful deepfakes, including deepfake pornography, deepfake frauds/scams and other potentially harmful deepfakes such as those that spread health/religious misinformation/propaganda. In terms of common targets, exposure to deepfakes featuring celebrities was 50.2%, whereas those featuring politicians was 34.1%. And 5.7% of respondents recall exposure to a selection of high profile political deepfakes in the UK. Second, while exposure to harmful deepfakes was relatively low, awareness of and fears about deepfakes were high (and women were significantly more likely to report experiencing such fears than men). As with fears, general concerns about the spread of deepfakes were also high; 90.4% of the respondents were either very concerned or somewhat concerned about this issue. Most respondents (at least 91.8%) were concerned that deepfakes could add to online child sexual abuse material, increase distrust in information and manipulate public opinion. Third, while awareness about deepfakes was high, usage of deepfake tools was relatively low (8%). Most respondents were not confident about their detection abilities and were trustful of audiovisual content online. Our work highlights how the problem of deepfakes has become embedded in public consciousness in just a few years; it also highlights the need for media literacy programmes and other policy interventions to address the spread of harmful deepfakes.
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are simultaneously present.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- Detecting Deepfakes Without Seeing Any [43.113936505905336]
The idea of "fact checking" is adapted from fake news detection to detect zero-day deepfake attacks.
FACTOR is a recipe for deepfake fact checking and demonstrates its power in critical attack settings.
It is training-free, relies exclusively on off-the-shelf features, is very easy to implement, and does not see any deepfakes (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2023-11-02T17:59:31Z)
- Recent Advancements In The Field Of Deepfake Detection [0.0]
A deepfake is a photo or video of a person whose image has been digitally altered or partially replaced with an image of someone else.
Deepfakes have the potential to cause a variety of problems and are often used maliciously.
Our objective is to survey and analyze a variety of current methods and advances in the field of deepfake detection.
arXiv Detail & Related papers (2023-08-10T13:24:27Z)
- Hybrid Deepfake Detection Utilizing MLP and LSTM [0.0]
A deepfake is a fabrication made possible by recent technological advancements.
In this paper, we propose a new deepfake detection schema utilizing two deep learning algorithms.
We evaluate our model on the 140k Real and Fake Faces dataset, detecting images altered by deepfakes with accuracy as high as 74.7% (an illustrative sketch of one such MLP+LSTM hybrid follows the list below).
arXiv Detail & Related papers (2023-04-21T16:38:26Z)
- DeePhy: On Deepfake Phylogeny [58.01631614114075]
DeePhy is a novel Deepfake Phylogeny dataset which consists of 5040 deepfake videos generated using three different generation techniques.
We present the benchmark on DeePhy dataset using six deepfake detection algorithms.
arXiv Detail & Related papers (2022-09-19T15:30:33Z)
- Robust Deepfake On Unrestricted Media: Generation And Detection [46.576556314444865]
Recent advances in deep learning have led to substantial improvements in deepfake generation.
This chapter explores the evolution of and challenges in deepfake generation and detection.
arXiv Detail & Related papers (2022-02-13T06:53:39Z)
- How Deep Are the Fakes? Focusing on Audio Deepfake: A Survey [0.0]
This paper critically analyzes and provides a unique source of audio deepfake research, mostly ranging from 2016 to 2020.
This survey provides readers with a summary of (1) different deepfake categories, (2) how they could be created and detected, and (3) the most recent trends in this domain and shortcomings in detection methods.
We found that Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), and Deep Neural Networks (DNNs) are common ways of creating and detecting deepfakes.
arXiv Detail & Related papers (2021-11-28T18:28:30Z)
- WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset, WildDeepfake, which consists of 7,314 face sequences extracted from 707 deepfake videos collected entirely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z)
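As a rough illustration of the training-free "fact checking" recipe summarised above (Detecting Deepfakes Without Seeing Any), the following minimal Python sketch scores a video by how consistent its face embeddings are with reference embeddings of the person the video claims to show. It assumes the embeddings have already been extracted with some off-the-shelf face encoder; the function names, aggregation, and decision threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identity_consistency_score(video_embeddings: np.ndarray,
                               reference_embeddings: np.ndarray) -> float:
    """Average, over frames, of the best match against the claimed identity's references.

    video_embeddings: (num_frames, dim) face embeddings from the video.
    reference_embeddings: (num_refs, dim) embeddings of the claimed identity.
    A high score means the faces look like the claimed person; a low score
    flags a possible identity-swap deepfake.
    """
    per_frame_best = [
        max(cosine_similarity(frame, ref) for ref in reference_embeddings)
        for frame in video_embeddings
    ]
    return float(np.mean(per_frame_best))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in embeddings; in practice these come from an off-the-shelf face encoder.
    refs = rng.normal(size=(5, 512))
    video = rng.normal(size=(30, 512))
    score = identity_consistency_score(video, refs)
    # The 0.5 threshold is an illustrative assumption, not taken from the paper.
    print(f"consistency score = {score:.3f}; flagged as fake: {score < 0.5}")
```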
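Likewise, the Hybrid Deepfake Detection Utilizing MLP and LSTM entry names its two building blocks but not their wiring. The PyTorch sketch below shows one plausible hybrid, offered only as an illustration and not the paper's architecture: an MLP encodes each image row, an LSTM aggregates the row sequence, and a linear head outputs a real/fake logit. The layer sizes, row-wise sequencing, and 256x256 input size are assumptions.

```python
import torch
import torch.nn as nn


class MLPLSTMDetector(nn.Module):
    """Illustrative MLP+LSTM hybrid for binary real/fake image classification."""

    def __init__(self, row_dim: int = 256 * 3, hidden: int = 128):
        super().__init__()
        # MLP encodes one image row at a time (row_dim = width * channels).
        self.mlp = nn.Sequential(
            nn.Linear(row_dim, 256), nn.ReLU(),
            nn.Linear(256, hidden), nn.ReLU(),
        )
        # LSTM aggregates the sequence of encoded rows (top to bottom).
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit > 0 means "fake"

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, channels, height, width) -> rows: (batch, height, width * channels)
        b, c, h, w = images.shape
        rows = images.permute(0, 2, 3, 1).reshape(b, h, w * c)
        encoded = self.mlp(rows)                  # (batch, height, hidden)
        _, (last_hidden, _) = self.lstm(encoded)  # last_hidden: (1, batch, hidden)
        return self.head(last_hidden[-1]).squeeze(-1)


if __name__ == "__main__":
    model = MLPLSTMDetector()
    dummy = torch.randn(4, 3, 256, 256)  # assumed 256x256 RGB face crops
    logits = model(dummy)
    print(logits.shape)  # torch.Size([4])
```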
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.