Smiling Women Pitching Down: Auditing Representational and
Presentational Gender Biases in Image Generative AI
- URL: http://arxiv.org/abs/2305.10566v1
- Date: Wed, 17 May 2023 20:59:10 GMT
- Authors: Luhang Sun, Mian Wei, Yibing Sun, Yoo Ji Suh, Liwei Shen, Sijia Yang
- Abstract summary: We examined the prevalence of two occupational gender biases in 15,300 DALL-E 2 images spanning 153 occupations.
DALL-E 2 underrepresents women in male-dominated fields while overrepresenting them in female-dominated occupations.
Our computational algorithm auditing study demonstrates more pronounced representational and presentational biases in DALL-E 2 compared to Google Images.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative AI models like DALL-E 2 can interpret textual prompts and generate
high-quality images exhibiting human creativity. Though public enthusiasm is
booming, systematic auditing of potential gender biases in AI-generated images
remains scarce. We addressed this gap by examining the prevalence of two
occupational gender biases (representational and presentational biases) in
15,300 DALL-E 2 images spanning 153 occupations, and assessed potential bias
amplification by benchmarking against 2021 census labor statistics and Google
Images. Our findings reveal that DALL-E 2 underrepresents women in
male-dominated fields while overrepresenting them in female-dominated
occupations. Additionally, DALL-E 2 images tend to depict more women than men
with smiling faces and downward-pitching heads, particularly in
female-dominated (vs. male-dominated) occupations. Our computational algorithm
auditing study demonstrates more pronounced representational and presentational
biases in DALL-E 2 compared to Google Images, and calls for feminist
interventions to prevent such bias-laden AI-generated images from feeding back
into the media ecology.
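The representational-bias measure described in the abstract — comparing the share of women in generated images per occupation against a census benchmark — can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline; the occupation names, counts, and census shares below are hypothetical.

```python
# Sketch of a representational-bias audit: compare the female share in
# generated images per occupation against a census labor-statistics
# benchmark. All data here is illustrative, not from the paper.

def representation_gap(generated_women, generated_total, census_share):
    """Return the generated female share minus the census female share.

    Negative values indicate underrepresentation of women relative to
    the benchmark; positive values indicate overrepresentation.
    """
    generated_share = generated_women / generated_total
    return generated_share - census_share

# Hypothetical audit rows: (occupation, women counted in 100 generated
# images, female share from census labor statistics).
audit = [
    ("software developer", 12, 0.26),  # male-dominated field
    ("nurse", 98, 0.87),               # female-dominated field
]

for occupation, women, census in audit:
    gap = representation_gap(women, 100, census)
    label = "under" if gap < 0 else "over"
    print(f"{occupation}: gap {gap:+.2f} ({label}represented)")
```

A full audit of the kind the paper reports would run such a comparison across all 153 occupations and add face-level classifiers for the presentational attributes (smiling, head pitch).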
Related papers
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models (LVLMs).
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- Bias in Generative AI
This study analyzed images generated by three popular generative artificial intelligence (AI) tools to investigate potential bias in AI generators.
All three AI generators exhibited bias against women and African Americans.
Women were depicted as younger with more smiles and happiness, while men were depicted as older with more neutral expressions and anger.
arXiv Detail & Related papers (2024-03-05T07:34:41Z)
- The Male CEO and the Female Assistant: Gender Biases in Text-To-Image Generation of Dual Subjects
We propose the Paired Stereotype Test (PST) framework to systematically evaluate T2I models in the dual-subject generation setting.
PST is a dual-subject generation task, i.e., generating two people in the same image.
We show that despite generating seemingly fair or even anti-stereotype single-person images, DALLE-3 still shows notable biases under PST.
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Social Biases through the Text-to-Image Generation Lens
Text-to-Image (T2I) generation is enabling new applications that support creators, designers, and general end users of productivity software.
We take a multi-dimensional approach to studying and quantifying common social biases as reflected in the generated images.
We present findings for two popular T2I models: DALLE-v2 and Stable Diffusion.
arXiv Detail & Related papers (2023-03-30T05:29:13Z)
- Auditing Gender Presentation Differences in Text-to-Image Models
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias
We show that language-vision AI models trained on web scrapes learn biases of sexual objectification.
Images of female professionals are more likely than images of male professionals to be associated with sexual descriptions.
arXiv Detail & Related papers (2022-12-21T18:54:19Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search
We study a unique gender bias in image search.
Search results are often gender-imbalanced even for gender-neutral natural language queries.
We introduce two novel debiasing approaches.
arXiv Detail & Related papers (2021-09-12T04:47:33Z)
- Multi-Dimensional Gender Bias Classification
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.