Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and
Knowledge in 10 Countries
- URL: http://arxiv.org/abs/2402.01721v2
- Date: Tue, 13 Feb 2024 22:26:23 GMT
- Title: Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and
Knowledge in 10 Countries
- Authors: Rebecca Umbach, Nicola Henry, Gemma Beard, Colleen Berryessa
- Abstract summary: Deepfake technologies have become ubiquitous, "democratizing" the ability to manipulate photos and videos.
One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet.
This article examines attitudes and behaviors related to "deepfake pornography" as a specific form of non-consensual synthetic intimate imagery (NSII).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deepfake technologies have become ubiquitous, "democratizing" the ability to
manipulate photos and videos. One popular use of deepfake technology is the
creation of sexually explicit content, which can then be posted and shared
widely on the internet. Drawing on a survey of over 16,000 respondents in 10
different countries, this article examines attitudes and behaviors related to
"deepfake pornography" as a specific form of non-consensual synthetic intimate
imagery (NSII). Our study found that deepfake pornography behaviors were
considered harmful by respondents, despite nascent societal awareness.
Regarding the prevalence of deepfake porn victimization and perpetration, 2.2%
of all respondents indicated personal victimization, and 1.8% of all
respondents indicated perpetration behaviors. Respondents from countries with
specific legislation still reported perpetration and victimization experiences,
suggesting NSII laws are inadequate to deter perpetration. Approaches to
prevent and reduce harms may include digital literacy education, as well as
enforced platform policies, practices, and tools which better detect, prevent,
and respond to NSII content.
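For a sense of scale, those percentages can be converted into approximate respondent counts and sampling error. The sketch below is a minimal back-of-the-envelope calculation, assuming a simple random sample of exactly 16,000 respondents; the study surveyed "over 16,000" people across 10 countries and may weight responses, so the figures are illustrative only.

```python
# Back-of-the-envelope check on the reported prevalence figures.
# Assumes a simple random sample of exactly n = 16,000 respondents;
# the paper reports "over 16,000", so these numbers are illustrative only.
import math

def prevalence_summary(p_hat: float, n: int, z: float = 1.96):
    """Return the implied count and a normal-approximation 95% CI for a proportion."""
    count = round(p_hat * n)
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return count, (p_hat - z * se, p_hat + z * se)

for label, p in [("victimization", 0.022), ("perpetration", 0.018)]:
    count, (lo, hi) = prevalence_summary(p, 16_000)
    print(f"{label}: ~{count} respondents, 95% CI [{lo:.3%}, {hi:.3%}]")
```

Under these assumptions, 2.2% corresponds to roughly 350 respondents, with a normal-approximation 95% interval of about plus or minus 0.2 percentage points.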
Related papers
- Behind the Deepfake: 8% Create; 90% Concerned. Surveying public exposure to and perceptions of deepfakes in the UK [1.0228192660021962]
This article examines public exposure to and perceptions of deepfakes based on insights from a nationally representative survey of 1403 UK adults.
On average, 15% report exposure to harmful deepfakes, including deepfake pornography, deepfake frauds/scams and other potentially harmful deepfakes.
While exposure to harmful deepfakes was relatively low, awareness of and fears about deepfakes were high.
Most respondents were concerned that deepfakes could add to online child sexual abuse material, increase distrust in information and manipulate public opinion.
arXiv Detail & Related papers (2024-07-08T00:22:51Z)
- "Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery [22.68931586977199]
AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media.
We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them.
arXiv Detail & Related papers (2024-06-08T16:57:20Z)
- "Did They F***ing Consent to That?": Safer Digital Intimacy via Proactive Protection Against Image-Based Sexual Abuse [12.424265801615322]
8 in 10 adults share intimate content such as nude or lewd images.
Stigmatizing attitudes and a lack of technological mitigations put those sharing such content at risk of sexual violence.
arXiv Detail & Related papers (2024-03-07T17:04:55Z)
- Unveiling Local Patterns of Child Pornography Consumption in France using Tor [0.6749750044497731]
We analyze local patterns of child pornography consumption across 1341 French communes in 20 metropolitan regions of France using fine-grained mobile traffic data of Tor network-related web services.
We estimate that approx. 0.08 % of Tor mobile download traffic observed in France is linked to the consumption of child sexual abuse materials by correlating it with local-level temporal porn consumption patterns.
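The correlation step behind an estimate like this can be illustrated in miniature: take an hourly volume series for Tor-related mobile traffic and an hourly series of local porn-site consumption for the same area, then compute their Pearson correlation. The data below are synthetic placeholders; the paper's actual pipeline (data sources, normalization, and the attribution behind the 0.08% share) is not reproduced here.

```python
# Illustrative only: Pearson correlation between two hourly time series,
# e.g. Tor-related mobile download volume vs. local porn-site consumption
# for one commune. Both arrays here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 7
baseline = np.sin(np.linspace(0, 14 * np.pi, hours)) + 2         # shared diurnal pattern
tor_traffic = baseline + rng.normal(scale=0.3, size=hours)        # hypothetical Tor volume
porn_consumption = baseline + rng.normal(scale=0.3, size=hours)   # hypothetical local pattern

r = np.corrcoef(tor_traffic, porn_consumption)[0, 1]
print(f"Pearson correlation over one week of hourly data: {r:.2f}")
```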
arXiv Detail & Related papers (2023-10-17T09:31:26Z)
- Tainted Love: A Systematic Review of Online Romance Fraud [68.8204255655161]
Romance fraud involves cybercriminals engineering a romantic relationship on online dating platforms.
We characterise the literary landscape on romance fraud, advancing the understanding of researchers and practitioners.
Three main contributions were identified: profiles of romance scams, countermeasures for mitigating romance scams, and factors that predispose an individual to become a scammer or a victim.
arXiv Detail & Related papers (2023-02-28T20:34:07Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- Reporting Revenge Porn: a Preliminary Expert Analysis [0.0]
We present a preliminary expert analysis of the process for reporting revenge porn abuses in selected content sharing platforms.
Among these, we included social networks, image hosting websites, video hosting platforms, forums, and pornographic sites.
arXiv Detail & Related papers (2021-06-23T08:08:59Z)
- A Study of Face Obfuscation in ImageNet [94.2949777826947]
In this paper, we explore image obfuscation in the ImageNet challenge.
Most categories in the ImageNet challenge are not people categories; nevertheless, many incidental people are in the images.
We benchmark various deep neural networks on face-blurred images and observe a disparate impact on different categories.
Results show that features learned on face-blurred images are equally transferable.
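The obfuscation step itself is straightforward to illustrate: blur each annotated face bounding box before training or evaluation. The sketch below uses Pillow with a hard-coded placeholder path and box; it is not the paper's annotation pipeline or blur settings.

```python
# Minimal face-obfuscation sketch: Gaussian-blur a face bounding box.
# The input path and box coordinates are placeholders, not the paper's data.
from PIL import Image, ImageFilter

def blur_region(img, box, radius=12):
    """Return a copy of `img` with the (left, upper, right, lower) box blurred."""
    out = img.copy()
    face = out.crop(box).filter(ImageFilter.GaussianBlur(radius=radius))
    out.paste(face, box)
    return out

if __name__ == "__main__":
    image = Image.open("example.jpg")              # placeholder path
    blurred = blur_region(image, (100, 60, 180, 160))
    blurred.save("example_blurred.jpg")
```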
arXiv Detail & Related papers (2021-03-10T17:11:34Z)
- Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
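The authors' TIP-IM algorithm is not reproduced here. As a rough illustration of the general idea behind adversarial identity masks, the sketch below runs a generic PGD-style loop (an assumption, not TIP-IM itself) that perturbs an image within a small L-infinity budget so that a differentiable face-embedding model, here a stand-in `model` assumed to map a (1, 3, H, W) tensor in [0, 1] to an L2-normalized embedding, no longer matches the original identity.

```python
# Generic iterative adversarial-mask sketch (NOT the authors' TIP-IM algorithm).
# `model` is a hypothetical differentiable face-embedding network.
import torch
import torch.nn.functional as F

def identity_mask(model, image, eps=8 / 255, step=1 / 255, iters=20):
    """PGD-style perturbation that pushes `image` away from its own face embedding."""
    with torch.no_grad():
        target = model(image)                          # embedding of the clean image
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        emb = model(image + delta)
        sim = F.cosine_similarity(emb, target).mean()  # similarity to the original identity
        sim.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()          # descend: reduce similarity
            delta.clamp_(-eps, eps)                    # stay inside the L-inf ball
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```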
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)
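Rank-1 identification accuracy, as used in audits like this, counts a probe as correct when its nearest gallery embedding (among enrolled identities and distractors) shares its identity. Below is a minimal sketch with cosine similarity and synthetic embeddings, not ArcFace outputs.

```python
# Minimal Rank-1 identification accuracy sketch with cosine similarity.
# Embeddings and labels are synthetic stand-ins, not ArcFace outputs.
import numpy as np

def rank1_accuracy(probe_emb, probe_ids, gallery_emb, gallery_ids):
    """Fraction of probes whose nearest gallery embedding has the same identity."""
    probe = probe_emb / np.linalg.norm(probe_emb, axis=1, keepdims=True)
    gallery = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = probe @ gallery.T                          # cosine similarity matrix
    nearest = gallery_ids[np.argmax(sims, axis=1)]
    return float(np.mean(nearest == probe_ids))

rng = np.random.default_rng(0)
gallery_emb = rng.normal(size=(1000, 128))            # enrolled identities + distractors
gallery_ids = np.arange(1000)
probe_ids = rng.integers(0, 100, size=50)             # probes from the first 100 identities
probe_emb = gallery_emb[probe_ids] + rng.normal(scale=0.1, size=(50, 128))  # noisy views
print(f"Rank-1 accuracy: {rank1_accuracy(probe_emb, probe_ids, gallery_emb, gallery_ids):.2%}")
```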