Smiling Women Pitching Down: Auditing Representational and
Presentational Gender Biases in Image Generative AI
- URL: http://arxiv.org/abs/2305.10566v1
- Date: Wed, 17 May 2023 20:59:10 GMT
- Title: Smiling Women Pitching Down: Auditing Representational and
Presentational Gender Biases in Image Generative AI
- Authors: Luhang Sun, Mian Wei, Yibing Sun, Yoo Ji Suh, Liwei Shen, Sijia Yang
- Abstract summary: We examined the prevalence of two occupational gender biases in 15,300 DALL-E 2 images spanning 153 occupations.
DALL-E 2 underrepresents women in male-dominated fields while overrepresenting them in female-dominated occupations.
Our computational algorithm auditing study demonstrates more pronounced representational and presentational biases in DALL-E 2 compared to Google Images.
- Score: 0.6990493129893111
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative AI models like DALL-E 2 can interpret textual prompts and generate
high-quality images exhibiting human creativity. Though public enthusiasm is
booming, systematic auditing of potential gender biases in AI-generated images
remains scarce. We addressed this gap by examining the prevalence of two
occupational gender biases (representational and presentational biases) in
15,300 DALL-E 2 images spanning 153 occupations, and assessed potential bias
amplification by benchmarking against 2021 census labor statistics and Google
Images. Our findings reveal that DALL-E 2 underrepresents women in
male-dominated fields while overrepresenting them in female-dominated
occupations. Additionally, DALL-E 2 images tend to depict more women than men
with smiling faces and downward-pitching heads, particularly in
female-dominated (vs. male-dominated) occupations. Our computational algorithm
auditing study demonstrates more pronounced representational and presentational
biases in DALL-E 2 compared to Google Images and calls for feminist
interventions to prevent such bias-laden AI-generated images from feeding back into
the media ecology.
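For illustration, here is a minimal sketch of how the two bias measures described in the abstract could be computed, assuming per-image face annotations (perceived gender, smile, head pitch) are already available from an off-the-shelf face analyzer; the class, function, and field names below are hypothetical and not taken from the authors' actual pipeline.

```python
from dataclasses import dataclass


@dataclass
class FaceAnnotation:
    gender: str        # "female" or "male" (perceived, as labeled by a face analyzer)
    smiling: bool      # whether a smile was detected
    pitch_deg: float   # head pitch in degrees; negative = downward-pitching


def representational_bias(faces, census_female_share):
    """Difference between the share of women depicted for an occupation
    and that occupation's female share in the 2021 census statistics."""
    female_share = sum(f.gender == "female" for f in faces) / len(faces)
    return female_share - census_female_share


def presentational_gaps(faces):
    """Female-minus-male gaps in smiling rate and downward head pitch.
    Assumes both genders appear at least once in the sample."""
    women = [f for f in faces if f.gender == "female"]
    men = [f for f in faces if f.gender == "male"]
    smile_gap = (sum(f.smiling for f in women) / len(women)
                 - sum(f.smiling for f in men) / len(men))
    pitch_gap = (sum(f.pitch_deg < 0 for f in women) / len(women)
                 - sum(f.pitch_deg < 0 for f in men) / len(men))
    return smile_gap, pitch_gap
```

In this reading, a positive representational-bias value for a female-dominated occupation and a negative value for a male-dominated one would correspond to the over- and under-representation pattern the paper reports, while positive smile and pitch gaps would correspond to its presentational findings.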
Related papers
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- She Works, He Works: A Curious Exploration of Gender Bias in AI-Generated Imagery [0.0]
This paper examines gender bias in AI-generated imagery of construction workers, highlighting discrepancies in the portrayal of male and female figures.
Grounded in Griselda Pollock's theories on visual culture and gender, the analysis reveals that AI models tend to sexualize female figures while portraying male figures as more authoritative and competent.
arXiv Detail & Related papers (2024-07-26T05:56:18Z)
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models (LVLMs).
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- Bias in Generative AI [2.5830293457323266]
This study analyzed images generated by three popular generative artificial intelligence (AI) tools to investigate potential bias in AI generators.
All three AI generators exhibited bias against women and African Americans.
Women were depicted as younger with more smiles and happiness, while men were depicted as older with more neutral expressions and anger.
arXiv Detail & Related papers (2024-03-05T07:34:41Z)
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- Stable Diffusion Exposed: Gender Bias from Prompt to Image [25.702257177921048]
This paper introduces an evaluation protocol that analyzes the impact of gender indicators at every step of the generation process on Stable Diffusion images.
Our findings include the existence of differences in the depiction of objects, such as instruments tailored for specific genders, and shifts in overall layouts.
arXiv Detail & Related papers (2023-12-05T10:12:59Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.