Stereotypes and Smut: The (Mis)representation of Non-cisgender
Identities by Text-to-Image Models
- URL: http://arxiv.org/abs/2305.17072v1
- Date: Fri, 26 May 2023 16:28:49 GMT
- Title: Stereotypes and Smut: The (Mis)representation of Non-cisgender
Identities by Text-to-Image Models
- Authors: Eddie L. Ungless, Björn Ross and Anne Lauscher
- Abstract summary: We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
These improvements could pave the way for a future where change is led by the affected community.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cutting-edge image generation has been praised for producing high-quality
images, suggesting a ubiquitous future in a variety of applications. However,
initial studies have pointed to the potential for harm due to predictive bias,
reflecting and potentially reinforcing cultural stereotypes. In this work, we
are the first to investigate how multimodal models handle diverse gender
identities. Concretely, we conduct a thorough analysis in which we compare the
output of three image generation models for prompts containing cisgender vs.
non-cisgender identity terms. Our findings demonstrate that certain
non-cisgender identities are consistently (mis)represented as less human, more
stereotyped and more sexualised. We complement our experimental analysis with
(a) a survey among non-cisgender individuals and (b) a series of interviews, to
establish which harms affected individuals anticipate, and how they would like
to be represented. We find respondents are particularly concerned about
misrepresentation, and the potential to drive harmful behaviours and beliefs.
Simple heuristics to limit offensive content are widely rejected, and instead
respondents call for community involvement, curated training data and the
ability to customise. These improvements could pave the way for a future where
change is led by the affected community, and technology is used to positively
"[portray] queerness in ways that we haven't even thought of" rather than
reproducing stale, offensive stereotypes.
Related papers
- Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes [12.704072523930444]
This study investigates eleven strategies to automatically counteract and challenge gender stereotypes in online communications.
We present AI-generated gender-based counter-stereotypes to study participants and ask them to assess their offensiveness, plausibility, and potential effectiveness.
arXiv Detail & Related papers (2024-04-18T01:48:28Z)
- Revisiting The Classics: A Study on Identifying and Rectifying Gender Stereotypes in Rhymes and Poems [0.0]
The work contributes by gathering a dataset of rhymes and poems to identify gender stereotypes and propose a model with 97% accuracy to identify gender bias.
Gender stereotypes were rectified using a Large Language Model (LLM) and its effectiveness was evaluated in a comparative survey against human educator rectifications.
arXiv Detail & Related papers (2024-03-18T13:02:02Z)
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
arXiv Detail & Related papers (2023-12-05T10:12:59Z)
- Stable Diffusion Exposed: Gender Bias from Prompt to Image [25.702257177921048]
This paper introduces an evaluation protocol that analyzes the impact of gender indicators at every step of the generation process on Stable Diffusion images.
Our findings include the existence of differences in the depiction of objects, such as instruments tailored for specific genders, and shifts in overall layouts.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-10-27T07:32:39Z)
- How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? [67.97752431429865]
We study the effect on the diversity of the generated images when adding ethical intervention.
Preliminary studies indicate that certain phrases, such as 'irrespective of gender', trigger a large change in the model predictions.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models [73.12069620086311]
We investigate the visual reasoning capabilities and social biases of text-to-image models.
First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding.
Second, we assess the gender and skin tone biases by measuring the gender/skin tone distribution of generated images.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.