More of the Same: Persistent Representational Harms Under Increased Representation
- URL: http://arxiv.org/abs/2503.00333v1
- Date: Sat, 01 Mar 2025 03:45:35 GMT
- Title: More of the Same: Persistent Representational Harms Under Increased Representation
- Authors: Jennifer Mickel, Maria De-Arteaga, Leqi Liu, Kevin Tian
- Abstract summary: We show that women are more represented than men when models are prompted to generate biographies or personas. This results in a proliferation of representational harms, stereotypes, and neoliberal ideals.
- Score: 10.675712374796356
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To recognize and mitigate the harms of generative AI systems, it is crucial to consider who is represented in the outputs of generative AI systems and how people are represented. A critical gap emerges when naively improving who is represented, as this does not imply that bias mitigation efforts have been applied to address how people are represented. We critically examine this by investigating gender representation across occupations in state-of-the-art large language models. We first show evidence suggesting that over time there have been interventions to models altering the resulting gender distribution, and we find that women are more represented than men when models are prompted to generate biographies or personas. We then demonstrate that representational biases persist in how different genders are represented by examining statistically significant word differences across genders. This results in a proliferation of representational harms, stereotypes, and neoliberal ideals that, despite existing interventions to increase female representation, reinforce existing systems of oppression.
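The analysis described in the abstract can be illustrated with a small sketch: collect model-generated biographies by prompted gender, then test which words appear at significantly different rates for one gender than the other. The snippet below is a minimal illustration only; it assumes the biographies have already been generated and collected into two lists, and it uses a per-word two-proportion z-test, which is one possible significance test rather than necessarily the paper's exact method.

```python
import math
import re
from collections import Counter

def word_counts(texts):
    """Count lowercase word occurrences across a list of biographies."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

def significant_word_differences(bios_women, bios_men, z_threshold=2.58):
    """Return words whose usage rate differs significantly between the
    biographies generated for women and for men (two-proportion z-test)."""
    counts_w, counts_m = word_counts(bios_women), word_counts(bios_men)
    n_w, n_m = sum(counts_w.values()), sum(counts_m.values())
    results = []
    for word in set(counts_w) | set(counts_m):
        c_w, c_m = counts_w[word], counts_m[word]
        p_w, p_m = c_w / n_w, c_m / n_m
        p_pool = (c_w + c_m) / (n_w + n_m)        # pooled rate under H0
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_w + 1 / n_m))
        if se == 0:
            continue
        z = (p_w - p_m) / se
        if abs(z) >= z_threshold:                 # ~99% two-sided level
            results.append((word, z))
    return sorted(results, key=lambda item: -abs(item[1]))

# Hypothetical usage: in practice each list would hold many biographies
# generated from occupation prompts (e.g. "write a biography of a nurse").
bios_women = ["She is a compassionate nurse who volunteers on weekends."]
bios_men = ["He is an ambitious engineer who leads a large team."]
print(significant_word_differences(bios_women, bios_men, z_threshold=1.0))
```

In a real run, a much larger corpus per gender and a multiple-comparison correction would be needed before calling any word difference significant.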
Related papers
- Gender Bias in Instruction-Guided Speech Synthesis Models [55.2480439325792]
This study investigates potential gender bias in how models interpret occupation-related prompts.
We explore whether these models exhibit tendencies to amplify gender stereotypes when interpreting such prompts.
Our experimental results reveal a tendency to exhibit gender bias for certain occupations.
arXiv Detail & Related papers (2025-02-08T17:38:24Z) - Evaluating Gender Bias in Large Language Models [0.8636148452563583]
The study examines the extent to which Large Language Models (LLMs) exhibit gender bias in pronoun selection in occupational contexts.
The jobs considered include a range of occupations, from those with a significant male presence to those with a notable female concentration.
The results show a positive correlation between the models' pronoun choices and the gender distribution present in U.S. labor force data.
arXiv Detail & Related papers (2024-11-14T22:23:13Z) - Generalizing Fairness to Generative Language Models via Reformulation of Non-discrimination Criteria [4.738231680800414]
This paper studies how to uncover and quantify the presence of gender biases in generative language models.
We derive generative AI analogues of three well-known non-discrimination criteria from classification, namely independence, separation and sufficiency (the standard classification-setting definitions are sketched after this list).
Our results address the presence of occupational gender bias within such conversational language models.
arXiv Detail & Related papers (2024-03-13T14:19:08Z) - The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
arXiv Detail & Related papers (2024-02-16T21:32:27Z) - Stable Diffusion Exposed: Gender Bias from Prompt to Image [25.702257177921048]
This paper introduces an evaluation protocol that analyzes the impact of gender indicators at every step of the generation process on Stable Diffusion images.
Our findings include the existence of differences in the depiction of objects, such as instruments tailored for specific genders, and shifts in overall layouts.
arXiv Detail & Related papers (2023-12-05T10:12:59Z) - VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z) - Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
Suggested improvements could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z) - Fairness in AI Systems: Mitigating gender bias from language-vision models [0.913755431537592]
We study the extent of the impact of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption-based language-vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z) - Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z) - Easily Accessible Text-to-Image Generation Amplifies Demographic
Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find that a broad range of ordinary prompts produces stereotypes, including prompts that simply mention traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z) - Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
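For reference, the three classification-setting non-discrimination criteria that the "Generalizing Fairness to Generative Language Models" paper above adapts are standard in the fairness literature. With predictor $\hat{Y}$, target $Y$, and sensitive attribute $A$ (here, gender), they are usually stated as follows; how the paper reformulates them for generative models is not reproduced here.

```latex
% Standard classification-setting non-discrimination criteria.
\begin{align*}
\text{Independence:} \quad & \hat{Y} \perp A
  && \text{(predictions do not depend on gender)} \\
\text{Separation:}   \quad & \hat{Y} \perp A \mid Y
  && \text{(equal error behaviour across genders)} \\
\text{Sufficiency:}  \quad & Y \perp A \mid \hat{Y}
  && \text{(predictions equally calibrated across genders)}
\end{align*}
```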
This list is automatically generated from the titles and abstracts of the papers on this site.