"Draw an Ugly Person": An Exploration of Generative AI's Perceptions of Ugliness
- URL: http://arxiv.org/abs/2507.12212v1
- Date: Wed, 16 Jul 2025 13:16:56 GMT
- Title: "Draw an Ugly Person": An Exploration of Generative AI's Perceptions of Ugliness
- Authors: Garyoung Kim, Huisung Kwon, Seoju Yun, Yu-Won Youn,
- Abstract summary: Generative AI not only replicates human creativity but also reproduces deep-seated cultural biases. This study investigates how four different generative AI models understand and express ugliness through text and image.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI not only replicates human creativity but also reproduces deep-seated cultural biases, making it crucial to critically examine how concepts like ugliness are understood and expressed by these tools. This study investigates how four different generative AI models understand and express ugliness through text and image, and explores the biases embedded within these representations. We extracted 13 adjectives associated with ugliness through iterative prompting of a large language model and generated 624 images across four AI models and three prompts. Demographic and socioeconomic attributes within the images were independently coded and thematically analyzed. Our findings show that AI models disproportionately associate ugliness with old white male figures, reflecting entrenched social biases as well as paradoxical biases, where efforts to avoid stereotypical depictions of marginalized groups inadvertently result in the disproportionate projection of negative attributes onto majority groups. Qualitative analysis further reveals that, despite supposed attempts to frame ugliness within social contexts, conventional physical markers such as asymmetry and aging persist as central visual motifs. These findings demonstrate that, despite attempts to create more equal representations, generative AI continues to perpetuate inherited and paradoxical biases, underscoring the critical work still required to create ethical AI training paradigms and advance methodologies for more inclusive AI development.
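The abstract describes a two-stage pipeline: adjectives are first elicited from a large language model by iterative prompting, and images are then generated for each adjective across several models and prompt templates. The sketch below is a minimal illustration of that shape, not the authors' code: the OpenAI Python client, the model names, and the prompt templates are all assumed placeholders.

```python
# Minimal sketch of the abstract's two-stage pipeline, NOT the authors' code.
# Assumptions: the OpenAI Python client (pip install openai) for both stages;
# the model names and prompt templates are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def elicit_adjectives(n_rounds: int = 5) -> set[str]:
    """Iteratively prompt a language model for adjectives associated with ugliness."""
    adjectives: set[str] = set()
    for _ in range(n_rounds):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[{
                "role": "user",
                "content": "List adjectives commonly associated with ugliness, comma-separated.",
            }],
        )
        adjectives.update(
            w.strip().lower() for w in resp.choices[0].message.content.split(",")
        )
    return adjectives


def generate_images(adjectives: set[str], templates: list[str]) -> list[str]:
    """Generate one image URL per (adjective, template) pair for later coding."""
    urls = []
    for adj in sorted(adjectives):
        for template in templates:
            img = client.images.generate(
                model="dall-e-3",  # stand-in for one of the four audited models
                prompt=template.format(adjective=adj),
                n=1,
            )
            urls.append(img.data[0].url)
    return urls


# Three hypothetical prompt templates, echoing the study's three-prompt design.
templates = [
    "Draw an ugly person.",
    "Draw a person who is {adjective}.",
    "Draw a person society would call {adjective}.",
]
urls = generate_images(elicit_adjectives(), templates)
```

In the study itself, the resulting images would then be independently coded for demographic and socioeconomic attributes, a manual step this sketch leaves out.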
Related papers
- When Cars Have Stereotypes: Auditing Demographic Bias in Objects from Text-to-Image Models [4.240144901142787]
We introduce SODA (Stereotyped Object Diagnostic Audit), a novel framework for measuring such biases.
Our approach compares visual attributes of objects generated with demographic cues to those from neutral prompts.
We uncover strong associations between specific demographic groups and visual attributes, such as recurring color patterns prompted by gender or ethnicity cues.
arXiv Detail & Related papers (2025-08-05T14:15:53Z)
- Thinking with Images for Multimodal Reasoning: Foundations, Methods, and Future Frontiers [90.4459196223986]
A similar evolution is now unfolding in AI, marking a paradigm shift from models that merely think about images to those that can truly think with images.
This emerging paradigm is characterized by models leveraging visual information as intermediate steps in their thought process, transforming vision from a passive input into a dynamic, manipulable cognitive workspace.
arXiv Detail & Related papers (2025-06-30T14:48:35Z)
- Using complex prompts to identify fine-grained biases in image generation through ChatGPT-4o [0.0]
Two dimensions of bias can be revealed through the study of large AI models: not only bias in training data or the products of an AI, but also bias in society.
I briefly discuss how complex prompts to image-generation AI can be used to investigate either dimension of bias.
arXiv Detail & Related papers (2025-04-01T03:17:35Z)
- Exploring Bias in over 100 Text-to-Image Generative Models [49.60774626839712]
We investigate bias trends in text-to-image generative models over time, focusing on the increasing availability of models through open platforms like Hugging Face.
We assess bias across three key dimensions: (i) distribution bias, (ii) generative hallucination, and (iii) generative miss-rate (a minimal formalization of distribution bias is sketched after this list).
Our findings indicate that artistic and style-transferred models exhibit significant bias, whereas foundation models, benefiting from broader training distributions, are becoming progressively less biased.
arXiv Detail & Related papers (2025-03-11T03:40:44Z)
- Alien Recombination: Exploring Concept Blends Beyond Human Cognitive Availability in Visual Art [90.8684263806649]
We show how AI can transcend human cognitive limitations in visual art creation.
Our research hypothesizes that visual art contains a vast unexplored space of conceptual combinations.
We present the Alien Recombination method to identify and generate concept combinations that lie beyond human cognitive availability.
arXiv Detail & Related papers (2024-11-18T11:55:38Z)
- Safeguard Text-to-Image Diffusion Models with Human Feedback Inversion [51.931083971448885]
We propose a framework named Human Feedback Inversion (HFI), where human feedback on model-generated images is condensed into textual tokens guiding the mitigation or removal of problematic images.
Our experimental results demonstrate our framework significantly reduces objectionable content generation while preserving image quality, contributing to the ethical deployment of AI in the public sphere.
arXiv Detail & Related papers (2024-07-17T05:21:41Z)
- Disability Representations: Finding Biases in Automatic Image Generation [0.0]
This study investigates the representation biases in popular image generation models towards people with disabilities (PWD).
The results indicate a significant bias, with most generated images portraying disabled individuals as old, sad, and predominantly using manual wheelchairs.
These findings highlight the urgent need for more inclusive AI development, ensuring diverse and accurate representation of PWD in generated images.
arXiv Detail & Related papers (2024-06-21T09:12:31Z)
- Quality Assessment for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- Exploring the Naturalness of AI-Generated Images [59.04528584651131]
We take the first step to benchmark and assess the visual naturalness of AI-generated images.
We propose the Joint Objective Image Naturalness evaluaTor (JOINT) to automatically predict the naturalness of AGIs in a way that aligns with human ratings.
We demonstrate that JOINT significantly outperforms baselines, providing more subjectively consistent results on naturalness assessment.
arXiv Detail & Related papers (2023-12-09T06:08:09Z)
- TIBET: Identifying and Evaluating Biases in Text-to-Image Generative Models [22.076898042211305]
We propose a general approach to study and quantify a broad spectrum of biases, for any TTI model and for any prompt.
Our approach automatically identifies potential biases that might be relevant to the given prompt, and measures those biases.
We show that our method is uniquely capable of explaining complex multi-dimensional biases through semantic concepts.
arXiv Detail & Related papers (2023-12-03T02:31:37Z)
- Unmaking AI Imagemaking: A Methodological Toolkit for Critical Investigation [0.0]
We provide three methodological approaches for investigating AI image models.
Unmaking the ecosystem analyzes the values, structures, and incentives surrounding the model's production.
Unmaking the output analyzes the model's generative results, revealing its logics.
arXiv Detail & Related papers (2023-07-19T05:26:10Z)
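As a side note on the "Exploring Bias in over 100 Text-to-Image Generative Models" entry above: its summary names distribution bias as one of three dimensions but does not define it. A common, hypothetical way to formalize it is the total variation distance between the demographic labels coded from generated images and an even spread over a reference set of groups; the function name and groups below are illustrative only, not the paper's metric.

```python
# Hypothetical formalization of "distribution bias" (the paper's exact
# definition is not given in this summary). Measures total variation distance
# between the demographic labels coded from generated images and an even
# spread over a chosen reference set of groups.
from collections import Counter


def distribution_bias(labels: list[str], groups: list[str]) -> float:
    """0.0 means even coverage of `groups`; higher values mean heavier skew."""
    counts = Counter(labels)
    n = len(labels)
    uniform = 1.0 / len(groups)
    return 0.5 * sum(abs(counts.get(g, 0) / n - uniform) for g in groups)


# Example: 8 images coded against four illustrative demographic groups.
groups = ["young female", "young male", "old female", "old male"]
labels = ["old male"] * 6 + ["young female"] * 2
print(distribution_bias(labels, groups))  # 0.5 -> output heavily skewed
```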
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.