Erasing 'Ugly' from the Internet: Propagation of the Beauty Myth in Text-Image Models
- URL: http://arxiv.org/abs/2511.00749v2
- Date: Tue, 04 Nov 2025 22:07:28 GMT
- Title: Erasing 'Ugly' from the Internet: Propagation of the Beauty Myth in Text-Image Models
- Authors: Tanvi Dinkar, Aiqi Jiang, Gavin Abercrombie, Ioannis Konstas,
- Abstract summary: This work studies how generative AI models may encode 'beauty' and erase 'ugliness'. We develop a structured beauty taxonomy which we use to prompt three language models and two text-to-image models to cumulatively generate 5984 images. We then recruit women and non-binary social media users to evaluate 1200 of the images through a Likert-scale within-subjects study. Results show that 86.5% of generated images depicted people with lighter skin tones, 22% contained explicit content despite Safe for Work (SFW) training, and 74% were rated as being in a younger age demographic.
- Score: 6.917327316794737
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media has exacerbated the promotion of Western beauty norms, leading to negative self-image, particularly in women and girls, and causing harms such as body dysmorphia. Increasingly, content on the internet has been artificially generated, leading to concerns that these norms are being exaggerated. The aim of this work is to study how generative AI models may encode 'beauty' and erase 'ugliness', and to discuss the implications of this for society. To investigate these aims, we create two image generation pipelines: a text-to-image model and a text-to-language-model-to-image model. We develop a structured beauty taxonomy which we use to prompt three language models (LMs) and two text-to-image models to cumulatively generate 5984 images using our two pipelines. We then recruit women and non-binary social media users to evaluate 1200 of the images through a Likert-scale within-subjects study. Participants show high agreement in their ratings. Our results show that 86.5% of generated images depicted people with lighter skin tones, 22% contained explicit content despite Safe for Work (SFW) training, and 74% were rated as being in a younger age demographic. In particular, the images of non-binary individuals were rated as both younger and more hypersexualised, indicating troubling intersectional effects. Notably, prompts encoded with 'negative' or 'ugly' beauty traits (such as "a wide nose") consistently produced higher Not SFW (NSFW) ratings regardless of gender. This work sheds light on the pervasive demographic biases related to beauty standards present in generative AI models -- biases that are actively perpetuated by model developers, such as via negative prompting. We conclude by discussing the implications of this for society, which include pollution of the data streams and active erasure of features that do not fall inside the stereotype of what developers consider beautiful.
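The abstract describes two generation pipelines: prompting a text-to-image model directly with taxonomy terms, and routing the prompt through a language model first. The following is a minimal sketch of how such pipelines could be wired up; the taxonomy terms, prompt template, and model checkpoints are illustrative placeholders, not the ones used in the paper.

```python
# Minimal sketch of the two prompting pipelines described in the abstract.
# All taxonomy terms, prompts, and checkpoints below are hypothetical examples.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# A toy slice of a "beauty taxonomy": each entry is a trait to embed in a prompt.
taxonomy = ["a wide nose", "high cheekbones", "freckled skin"]

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image model shared by both pipelines (placeholder checkpoint).
t2i = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1"
).to(device)

# Language model used only by the second pipeline (placeholder LM).
lm = pipeline("text-generation", model="gpt2")

for trait in taxonomy:
    base_prompt = f"a portrait photo of a person with {trait}"

    # Pipeline 1: taxonomy term -> text-to-image model directly.
    direct_image = t2i(base_prompt).images[0]
    direct_image.save(f"direct_{trait.replace(' ', '_')}.png")

    # Pipeline 2: taxonomy term -> LM expands the prompt -> text-to-image model.
    expanded = lm(base_prompt, max_new_tokens=40)[0]["generated_text"]
    lm_image = t2i(expanded).images[0]
    lm_image.save(f"lm_{trait.replace(' ', '_')}.png")
```

In the study, the resulting images were then rated by human participants (e.g. for age, skin tone, and SFW/NSFW content); that evaluation step is not part of this sketch.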
Related papers
- Adultification Bias in LLMs and Text-to-Image Models [55.02903075972816]
We study bias along axes of race and gender in young girls. We focus on "adultification bias," a phenomenon in which Black girls are presumed to be more defiant, sexually intimate, and culpable than their White peers.
arXiv Detail & Related papers (2025-06-08T21:02:33Z) - A Taxonomy of the Biases of the Images created by Generative Artificial Intelligence [2.0257616108612373]
Generative artificial intelligence models show amazing performance in automatically creating unique content from just a user-provided prompt.
We analyze in detail how the content generated by these models can be strongly biased with respect to a plethora of variables.
We discuss the social, political and economic implications of these biases and possible ways to mitigate them.
arXiv Detail & Related papers (2024-05-02T22:01:28Z) - The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned male-stereotyped and female-stereotyped social identities. Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
arXiv Detail & Related papers (2024-02-16T21:32:27Z) - New Job, New Gender? Measuring the Social Bias in Image Generation Models [85.26441602999014]
Image generation models are susceptible to generating content that perpetuates social stereotypes and biases.
We propose BiasPainter, a framework that can accurately, automatically and comprehensively trigger social bias in image generation models.
BiasPainter can achieve 90.8% accuracy on automatic bias detection, which is significantly higher than the results reported in previous work.
arXiv Detail & Related papers (2024-01-01T14:06:55Z) - Situating the social issues of image generation models in the model life cycle: a sociotechnical approach [20.99805435959377]
This paper reports on a novel, comprehensive categorization of the social issues associated with image generation models.
We identify seven issue clusters arising from image generation models: data issues, intellectual property, bias, privacy, and the impacts on the informational, cultural, and natural environments.
We argue that the risks posed by image generation models are comparable in severity to the risks posed by large language models.
arXiv Detail & Related papers (2023-11-30T08:32:32Z) - Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z) - Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias [11.6727088473067]
We show that language-vision AI models trained on web scrapes learn biases of sexual objectification.
Images of female professionals are more likely to be associated with sexual descriptions than images of male professionals.
arXiv Detail & Related papers (2022-12-21T18:54:19Z) - How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? [67.97752431429865]
We study the effect of adding ethical interventions on the diversity of the generated images.
Preliminary studies indicate that a large change in the model predictions is triggered by certain phrases such as 'irrespective of gender'.
arXiv Detail & Related papers (2022-10-27T07:32:39Z) - DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models [73.12069620086311]
We investigate the visual reasoning capabilities and social biases of text-to-image models.
First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding.
Second, we assess the gender and skin tone biases by measuring the gender/skin tone distribution of generated images.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)