Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm
- URL: http://arxiv.org/abs/2505.04600v2
- Date: Tue, 20 May 2025 08:13:33 GMT
- Title: Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm
- Authors: Laura Wagner, Eva Cetinic
- Abstract summary: Open-source text-to-image (TTI) pipelines have become dominant in the landscape of AI-generated visual content. This study presents an exploratory sociotechnical analysis of CivitAI, the most active platform for sharing and developing open-source TTI models. We find a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals.
- Score: 0.8103046443444949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Open-source text-to-image (TTI) pipelines have become dominant in the landscape of AI-generated visual content, driven by technological advances that enable users to personalize models through adapters tailored to specific tasks. While personalization methods such as LoRA offer unprecedented creative opportunities, they also facilitate harmful practices, including the generation of non-consensual deepfakes and the amplification of misogynistic or hypersexualized content. This study presents an exploratory sociotechnical analysis of CivitAI, the most active platform for sharing and developing open-source TTI models. Drawing on a dataset of more than 40 million user-generated images and over 230,000 models, we find a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals. We also observe a strong influence of internet subcultures on the tools and practices shaping model personalizations and resulting visual media. In response to these findings, we contextualize the emergence of exploitative visual media through feminist and constructivist perspectives on technology, emphasizing how design choices and community dynamics shape platform outcomes. Building on this analysis, we propose interventions aimed at mitigating downstream harm, including improved content moderation, rethinking tool design, and establishing clearer platform policies to promote accountability and consent.
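The adapter-based personalization the abstract refers to (e.g. LoRA) amounts to adding a small low-rank update to a frozen weight matrix, so that only the adapter needs to be trained and shared. The sketch below is a minimal NumPy illustration of that idea; the function name, shapes, and scaling factor are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass through a frozen weight matrix W plus a LoRA-style
    low-rank update. A (r x d_in) and B (d_out x r) are the small trainable
    adapter matrices, adding only r * (d_in + d_out) parameters.
    Equivalent to x @ (W + alpha * A.T @ B.T), computed without
    materializing the full-rank update."""
    return x @ W + alpha * (x @ A.T) @ B.T
```

Because only A and B are trained, an adapter for a large diffusion model can be distributed as a file of a few megabytes, which is part of what makes adapter-sharing platforms such as CivitAI viable at scale.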
Related papers
- A Critical Assessment of Modern Generative Models' Ability to Replicate Artistic Styles [0.0]
This paper presents a critical assessment of the style replication capabilities of contemporary generative models. We examine how effectively these models reproduce traditional artistic styles while maintaining structural integrity and compositional balance. The analysis is based on a new large dataset of AI-generated works imitating artistic styles of the past.
arXiv Detail & Related papers (2025-02-21T07:00:06Z) - Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice [186.055899073629]
Unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model. Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs. Both of these goals--the targeted removal of information from a model and the targeted suppression of information from a model's outputs--present various technical and substantive challenges.
arXiv Detail & Related papers (2024-12-09T20:18:43Z) - Autoregressive Models in Vision: A Survey [119.23742136065307]
This survey comprehensively examines the literature on autoregressive models applied to vision.
We divide visual autoregressive models into three general sub-categories, including pixel-based, token-based, and scale-based models.
We present a multi-faceted categorization of autoregressive models in computer vision, including image generation, video generation, 3D generation, and multi-modal generation.
arXiv Detail & Related papers (2024-11-08T17:15:12Z) - Civiverse: A Dataset for Analyzing User Engagement with Open-Source Text-to-Image Models [0.7209758868768352]
We analyze the Civiverse prompt dataset, encompassing millions of images and related metadata.
We focus on prompt analysis, specifically examining the semantic characteristics of text prompts.
Our findings reveal a predominant preference for generating explicit content, along with a focus on homogenization of semantic content.
arXiv Detail & Related papers (2024-08-10T21:41:03Z) - Exploring the Use of Abusive Generative AI Models on Civitai [22.509955105958625]
We study the use of Civitai, the largest AIGC social platform, for generating abusive content.
We construct a comprehensive dataset covering 87K models and 2M images.
We discuss strategies for moderation to better govern these platforms.
arXiv Detail & Related papers (2024-07-16T06:18:03Z) - A Survey on Personalized Content Synthesis with Diffusion Models [53.79316736660402]
This paper introduces the general frameworks of PCS research, which can be categorized into test-time fine-tuning (TTF) and pre-trained adaptation (PTA) approaches. We explore specialized tasks within the field, such as object, face, and style personalization, while highlighting their unique challenges and innovations. Despite the promising progress, we also discuss ongoing challenges, including overfitting and the trade-off between subject fidelity and text alignment.
arXiv Detail & Related papers (2024-05-09T04:36:04Z) - Multimodal Large Language Model is a Human-Aligned Annotator for Text-to-Image Generation [87.50120181861362]
VisionPrefer is a high-quality and fine-grained preference dataset that captures multiple preference aspects.
We train a reward model VP-Score over VisionPrefer to guide the training of text-to-image generative models and the preference prediction accuracy of VP-Score is comparable to human annotators.
arXiv Detail & Related papers (2024-04-23T14:53:15Z) - User Modeling and User Profiling: A Comprehensive Survey [0.0]
This paper presents a survey of the current state, evolution, and future directions of user modeling and profiling research.
We provide a historical overview, tracing the development from early stereotype models to the latest deep learning techniques.
We also address the critical need for privacy-preserving techniques and the push towards explainability and fairness in user modeling approaches.
arXiv Detail & Related papers (2024-02-15T02:06:06Z) - Language Agents for Detecting Implicit Stereotypes in Text-to-image Models at Scale [45.64096601242646]
We introduce a novel agent architecture tailored for stereotype detection in text-to-image models.
We build the stereotype-relevant benchmark based on multiple open-text datasets.
We find that these models often display serious stereotypes when it comes to certain prompts about personal characteristics.
arXiv Detail & Related papers (2023-10-18T08:16:29Z) - State of the Art on Diffusion Models for Visual Computing [191.6168813012954]
This report introduces the basic mathematical concepts of diffusion models, implementation details and design choices of the popular Stable Diffusion model.
We also give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing.
We discuss available datasets, metrics, open challenges, and social implications.
arXiv Detail & Related papers (2023-10-11T05:32:29Z) - Modeling Content Creator Incentives on Algorithm-Curated Platforms [76.53541575455978]
We study how algorithmic choices affect the existence and character of (Nash) equilibria in exposure games.
We propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.
arXiv Detail & Related papers (2022-06-27T08:16:59Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.