"Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery
- URL: http://arxiv.org/abs/2406.05520v2
- Date: Sun, 16 Jun 2024 04:23:09 GMT
- Title: "Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery
- Authors: Natalie Grace Brigham, Miranda Wei, Tadayoshi Kohno, Elissa M. Redmiles
- Abstract summary: AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media.
We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them.
- Score: 22.68931586977199
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media. We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them, including deepfakes portraying sexual acts. Respondents indicated strong opposition to creating and, even more so, sharing non-consensually created synthetic content, especially if that content depicts a sexual act. However, seeking out such content appeared more acceptable to some respondents. Attitudes around acceptability varied further based on the hypothetical creator's relationship to the participant, the respondent's gender and their attitudes towards sexual consent. This study provides initial insight into public perspectives of a growing threat and highlights the need for further research to inform social norms as well as ongoing policy conversations and technical developments in generative AI.
Related papers
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z) - "Did They F***ing Consent to That?": Safer Digital Intimacy via Proactive Protection Against Image-Based Sexual Abuse [12.424265801615322]
8 in 10 adults share intimate content such as nude or lewd images.
Stigmatizing attitudes and a lack of technological mitigations put those sharing such content at risk of sexual violence.
arXiv Detail & Related papers (2024-03-07T17:04:55Z) - Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries [0.0]
Deepfake technologies have become ubiquitous, "democratizing" the ability to manipulate photos and videos.
One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet.
This article examines attitudes and behaviors related to "deepfake pornography" as a specific form of non-consensual synthetic intimate imagery (NSII).
arXiv Detail & Related papers (2024-01-26T21:51:49Z) - The Age of Synthetic Realities: Challenges and Opportunities [85.058932103181]
We highlight the crucial need for the development of forensic techniques capable of identifying harmful synthetic creations and distinguishing them from reality.
Our focus extends to various forms of media, such as images, videos, audio, and text, as we examine how synthetic realities are crafted and explore approaches to detecting these malicious creations.
This study is of paramount importance due to the rapid progress of AI generative techniques and their impact on the fundamental principles of Forensic Science.
arXiv Detail & Related papers (2023-06-09T15:55:10Z) - DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection [57.51313366337142]
There has been growing concern over the use of generative AI for malicious purposes.
In the realm of visual content synthesis using generative AI, key areas of significant concern have been image forgery and data poisoning.
We introduce the DeepfakeArt Challenge, a large-scale challenge benchmark dataset designed specifically to aid in the building of machine learning algorithms for generative AI art forgery and data poisoning detection.
arXiv Detail & Related papers (2023-06-02T05:11:27Z) - Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
Addressing these misrepresentations could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z) - Exploring outlooks towards generative AI-based assistive technologies for people with Autism [2.5382095320488665]
We examined Reddit conversations regarding Nvidia's new videoconferencing feature, which allows participants to maintain eye contact during online meetings.
We found 162 relevant comments discussing the relevance and appropriateness of the technology for people with Autism.
We suggest that developing generative AI-based assistive solutions will have ramifications for human-computer interaction.
arXiv Detail & Related papers (2023-05-16T21:39:38Z) - Can Workers Meaningfully Consent to Workplace Wellbeing Technologies? [65.15780777033109]
This paper unpacks the challenges workers face when consenting to workplace wellbeing technologies.
We show how workers are vulnerable to "meaningless" consent as they may be subject to power dynamics that minimize their ability to withhold consent.
To meaningfully consent, participants wanted changes to the technology and to the policies and practices surrounding the technology.
arXiv Detail & Related papers (2023-03-13T16:15:07Z) - How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? [67.97752431429865]
We study the effect on the diversity of the generated images when adding ethical interventions.
Preliminary studies indicate that a large change in the model predictions is triggered by certain phrases such as 'irrespective of gender'.
arXiv Detail & Related papers (2022-10-27T07:32:39Z) - How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.