"Did They F***ing Consent to That?": Safer Digital Intimacy via Proactive Protection Against Image-Based Sexual Abuse
- URL: http://arxiv.org/abs/2403.04659v2
- Date: Fri, 14 Jun 2024 00:56:24 GMT
- Title: "Did They F***ing Consent to That?": Safer Digital Intimacy via Proactive Protection Against Image-Based Sexual Abuse
- Authors: Lucy Qin, Vaughn Hamilton, Sharon Wang, Yigit Aydinalp, Marin Scarlett, Elissa M. Redmiles
- Abstract summary: 8 in 10 adults share intimate content such as nude or lewd images.
Stigmatizing attitudes and a lack of technological mitigations put those sharing such content at risk of sexual violence.
- Score: 12.424265801615322
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As many as 8 in 10 adults share intimate content such as nude or lewd images. Sharing such content has significant benefits for relationship intimacy and body image, and can offer employment. However, stigmatizing attitudes and a lack of technological mitigations put those sharing such content at risk of sexual violence. An estimated 1 in 3 people have been subjected to image-based sexual abuse (IBSA), a spectrum of violence that includes the nonconsensual distribution or threat of distribution of consensually-created intimate content (also called NDII). In this work, we conducted a rigorous empirical interview study of 52 European creators of intimate content to examine the threats they face and how they defend against them, situated in the context of their different use cases for intimate content sharing and their choice of technologies for storing and sharing such content. Synthesizing our results with the limited body of prior work on technological prevention of NDII, we offer concrete next steps for both platforms and security & privacy researchers to work toward safer intimate content sharing through proactive protection. Content Warning: This work discusses sexual violence, specifically, the harms of image-based sexual abuse (particularly in Sections 2 and 6).
Related papers
- SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image And Video Generation [65.30207993362595]
Unlearning/editing-based methods for safe generation remove harmful concepts from models but face several challenges.
We propose SAFREE, a training-free approach for safe text-to-image (T2I) and text-to-video (T2V) generation.
We detect a subspace corresponding to a set of toxic concepts in the text embedding space and steer prompt embeddings away from this subspace.
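A minimal sketch of this subspace-steering idea, assuming embeddings from a generic text encoder; the function names, shapes, and the `strength` parameter below are illustrative, not SAFREE's actual API:

```python
# Hypothetical sketch: build an orthonormal basis for the span of known
# toxic-concept embeddings, then remove a prompt embedding's component
# inside that subspace (projection onto the orthogonal complement).
import numpy as np

def toxic_subspace_basis(toxic_embeddings: np.ndarray) -> np.ndarray:
    """Orthonormal basis (columns) for the toxic-concept subspace.

    toxic_embeddings: (k, d) array, one row per toxic concept.
    """
    # The right singular vectors span the row space of the concept matrix.
    _, _, vt = np.linalg.svd(toxic_embeddings, full_matrices=False)
    return vt.T  # shape (d, k), orthonormal columns

def steer_away(prompt_emb: np.ndarray, basis: np.ndarray,
               strength: float = 1.0) -> np.ndarray:
    """Subtract (a fraction of) the prompt's toxic-subspace component.

    strength trades off safety against prompt fidelity in this toy version.
    """
    toxic_component = basis @ (basis.T @ prompt_emb)
    return prompt_emb - strength * toxic_component

# Toy usage with random vectors standing in for text-encoder outputs.
rng = np.random.default_rng(0)
toxic = rng.normal(size=(8, 512))   # 8 toxic concept embeddings
prompt = rng.normal(size=512)
safe_prompt = steer_away(prompt, toxic_subspace_basis(toxic))
```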
arXiv Detail & Related papers (2024-10-16T17:32:23Z)
- Understanding Help-Seeking and Help-Giving on Social Media for Image-Based Sexual Abuse [28.586678492600864]
Image-based sexual abuse (IBSA) is a growing threat to people's digital safety.
In this paper, we explore how people seek and receive help for IBSA on social media.
arXiv Detail & Related papers (2024-06-18T00:23:00Z)
- Safer Digital Intimacy For Sex Workers And Beyond: A Technical Research Agenda [21.70034795348216]
Many people engage in digital intimacy: sex workers, their clients, and people who create and share intimate content recreationally.
With this intimacy comes significant security and privacy risk, exacerbated by stigma.
In this article, we present a commercial digital intimacy threat model and 10 research directions for safer digital intimacy.
arXiv Detail & Related papers (2024-03-15T21:16:01Z)
- Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries [0.0]
Deepfake technologies have become ubiquitous, "democratizing" the ability to manipulate photos and videos.
One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet.
This article examines attitudes and behaviors related to "deepfake pornography" as a specific form of non-consensual synthetic intimate imagery (NSII).
arXiv Detail & Related papers (2024-01-26T21:51:49Z)
- A deep-learning approach to early identification of suggested sexual harassment from videos [0.802904964931021]
Sexual harassment, sexual abuse, and sexual violence remain prevalent problems.
We classify the three terms (harassment, abuse, and violence) based on the visual attributes present in images depicting these situations.
We identified that factors such as the facial expressions of the victim and perpetrator, and unwanted touching, were directly linked to identifying such scenes.
Based on these definitions and characteristics, we have developed a first-of-its-kind dataset from various Indian movie scenes.
arXiv Detail & Related papers (2023-06-01T16:14:17Z)
- Can Workers Meaningfully Consent to Workplace Wellbeing Technologies? [65.15780777033109]
This paper unpacks the challenges workers face when consenting to workplace wellbeing technologies.
We show how workers are vulnerable to "meaningless" consent as they may be subject to power dynamics that minimize their ability to withhold consent.
To meaningfully consent, participants wanted changes to the technology and to the policies and practices surrounding the technology.
arXiv Detail & Related papers (2023-03-13T16:15:07Z)
- Privacy-Preserving Image Acquisition Using Trainable Optical Kernel [50.1239616836174]
We propose a trainable image acquisition method that removes sensitive identity-revealing information in the optical domain, before it reaches the image sensor.
Because the sensitive content is suppressed before it reaches the sensor, it never enters the digital domain and is therefore unretrievable by any privacy attack.
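The kernel itself is physical optics; the hypothetical sketch below only simulates the idea in software, as a non-negative convolution applied to the scene before any digital processing, trained adversarially. All module and variable names are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class SimulatedOpticalKernel(nn.Module):
    """Software stand-in for a passive optical element: one convolution
    with non-negative weights (passive optics cannot subtract light)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, scene: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            self.conv.weight.clamp_(min=0.0)  # enforce physical realizability
        return self.conv(scene)

# Adversarial-style training objective (sketch): keep a utility task
# working on the filtered output while an identity classifier fails.
#   loss = task_loss(task_head(optics(x)), y_task) \
#        - lambda_id * id_loss(id_head(optics(x)), y_identity)
```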
arXiv Detail & Related papers (2021-06-28T11:08:14Z)
- Reporting Revenge Porn: a Preliminary Expert Analysis [0.0]
We present a preliminary expert analysis of the process for reporting revenge porn abuse on selected content-sharing platforms.
Among these, we included social networks, image hosting websites, video hosting platforms, forums, and pornographic sites.
arXiv Detail & Related papers (2021-06-23T08:08:59Z)
- CelebA-Spoof: Large-Scale Face Anti-Spoofing Dataset with Rich Annotations [85.14435479181894]
CelebA-Spoof is a large-scale face anti-spoofing dataset.
It includes 625,537 pictures of 10,177 subjects, making it significantly larger than existing datasets.
It contains 10 spoof type annotations, as well as the 40 attribute annotations inherited from the original CelebA dataset.
arXiv Detail & Related papers (2020-07-24T04:28:29Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
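To make the "bits" figure concrete, here is a minimal sketch, assuming uncertainty is measured as the binary entropy of an attribute classifier's prediction; the `attr_classifier` referenced in the usage comment is a hypothetical model returning P(attribute = 1), and 1 bit is the maximum for a binary attribute:

```python
import numpy as np

def binary_entropy_bits(p: np.ndarray) -> np.ndarray:
    """Entropy, in bits, of a Bernoulli prediction with parameter p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # avoid log2(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mean_uncertainty(probs: np.ndarray) -> float:
    """Average classifier uncertainty, in bits, over a batch of predictions."""
    return float(binary_entropy_bits(probs).mean())

# Usage sketch: uncertainty gained by obfuscation.
# gain = mean_uncertainty(attr_classifier(obfuscated)) \
#      - mean_uncertainty(attr_classifier(originals))
```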
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
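The paper's actual algorithm is more involved; the sketch below is only a generic iterative-gradient interpretation of "adversarial identity mask", where the hypothetical `face_model` is an identity-embedding network and `target` is a consented stand-in identity:

```python
import torch

def identity_mask(face: torch.Tensor, target: torch.Tensor, face_model,
                  eps: float = 8 / 255, steps: int = 40,
                  alpha: float = 2 / 255) -> torch.Tensor:
    """Bounded perturbation delta so that face + delta no longer
    matches the original identity under face_model (sketch only)."""
    delta = torch.zeros_like(face, requires_grad=True)
    src_emb = face_model(face).detach()
    tgt_emb = face_model(target).detach()
    for _ in range(steps):
        emb = face_model(face + delta)
        # Pull the embedding toward the target identity, away from the source.
        loss = torch.cosine_similarity(emb, tgt_emb).mean() \
             - torch.cosine_similarity(emb, src_emb).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the loss
            delta.clamp_(-eps, eps)             # keep the mask imperceptible
            delta.grad.zero_()
    return delta.detach()
```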
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.