"Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery
- URL: http://arxiv.org/abs/2406.05520v2
- Date: Sun, 16 Jun 2024 04:23:09 GMT
- Title: "Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery
- Authors: Natalie Grace Brigham, Miranda Wei, Tadayoshi Kohno, Elissa M. Redmiles
- Abstract summary: AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media.
We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them.
- Score: 22.68931586977199
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media. We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them, including deepfakes portraying sexual acts. Respondents indicated strong opposition to creating and, even more so, sharing non-consensually created synthetic content, especially if that content depicts a sexual act. However, seeking out such content appeared more acceptable to some respondents. Attitudes around acceptability varied further based on the hypothetical creator's relationship to the participant, the respondent's gender and their attitudes towards sexual consent. This study provides initial insight into public perspectives of a growing threat and highlights the need for further research to inform social norms as well as ongoing policy conversations and technical developments in generative AI.
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have affected many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z) - She Works, He Works: A Curious Exploration of Gender Bias in AI-Generated Imagery [0.0]
This paper examines gender bias in AI-generated imagery of construction workers, highlighting discrepancies in the portrayal of male and female figures.
Grounded in Griselda Pollock's theories on visual culture and gender, the analysis reveals that AI models tend to sexualize female figures while portraying male figures as more authoritative and competent.
arXiv Detail & Related papers (2024-07-26T05:56:18Z) - Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z) - "Did They F***ing Consent to That?": Safer Digital Intimacy via Proactive Protection Against Image-Based Sexual Abuse [12.424265801615322]
8 in 10 adults share intimate content such as nude or lewd images.
Stigmatizing attitudes and a lack of technological mitigations put those sharing such content at risk of sexual violence.
arXiv Detail & Related papers (2024-03-07T17:04:55Z) - Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries [0.0]
Deepfake technologies have become ubiquitous, "democratizing" the ability to manipulate photos and videos.
One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet.
This article examines attitudes and behaviors related to "deepfake pornography" as a specific form of non-consensual synthetic intimate imagery (NSII).
arXiv Detail & Related papers (2024-01-26T21:51:49Z) - Exploring outlooks towards generative AI-based assistive technologies for people with Autism [2.5382095320488665]
We examined Reddit conversations regarding Nvidia's new videoconferencing feature, which allows participants to maintain eye contact during online meetings.
We found 162 relevant comments discussing the relevance and appropriateness of the technology for people with Autism.
We suggest that developing generative AI-based assistive solutions will have ramifications for human-computer interaction.
arXiv Detail & Related papers (2023-05-16T21:39:38Z) - Can Workers Meaningfully Consent to Workplace Wellbeing Technologies? [65.15780777033109]
This paper unpacks the challenges workers face when consenting to workplace wellbeing technologies.
We show how workers are vulnerable to "meaningless" consent as they may be subject to power dynamics that minimize their ability to withhold consent.
To meaningfully consent, participants wanted changes to the technology and to the policies and practices surrounding the technology.
arXiv Detail & Related papers (2023-03-13T16:15:07Z) - How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? [67.97752431429865]
We study the effect of adding ethical interventions on the diversity of the generated images.
Preliminary studies indicate that a large change in the model predictions is triggered by certain phrases such as 'irrespective of gender'.
arXiv Detail & Related papers (2022-10-27T07:32:39Z) - How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)