Stable Diffusion Exposed: Gender Bias from Prompt to Image
- URL: http://arxiv.org/abs/2312.03027v1
- Date: Tue, 5 Dec 2023 10:12:59 GMT
- Title: Stable Diffusion Exposed: Gender Bias from Prompt to Image
- Authors: Yankun Wu, Yuta Nakashima, Noa Garcia
- Abstract summary: We introduce an evaluation protocol designed to analyze the impact of gender indicators on Stable Diffusion images.
Our findings include the existence of differences in the depiction of objects, such as instruments tailored for specific genders.
We also reveal that neutral prompts tend to produce images more aligned with masculine prompts than their feminine counterparts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies have highlighted biases in generative models, shedding light
on their predisposition towards gender-based stereotypes and imbalances. This
paper contributes to this growing body of research by introducing an evaluation
protocol designed to automatically analyze the impact of gender indicators on
Stable Diffusion images. Leveraging insights from prior work, we explore how
gender indicators not only affect gender presentation but also the
representation of objects and layouts within the generated images. Our findings
include the existence of differences in the depiction of objects, such as
instruments tailored for specific genders, and shifts in overall layouts. We
also reveal that neutral prompts tend to produce images more aligned with
masculine prompts than their feminine counterparts, providing valuable insights
into the nuanced gender biases inherent in Stable Diffusion.
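The protocol described above compares images generated from matched prompts that differ only in their gender indicator. A minimal sketch of that prompt-variation step, assuming an illustrative template and indicator set (not the paper's exact wording), might look like this:

```python
# Hypothetical sketch of the prompt-variation step: a template with a
# {subject} slot is instantiated with neutral, feminine, and masculine
# gender indicators so the resulting images can be compared.
# The indicator phrases and template here are illustrative assumptions.

GENDER_INDICATORS = {
    "neutral": "a person",
    "feminine": "a woman",
    "masculine": "a man",
}

def make_prompt_variants(template: str) -> dict:
    """Fill the {subject} slot with each gender indicator."""
    return {k: template.format(subject=v) for k, v in GENDER_INDICATORS.items()}

variants = make_prompt_variants("a photo of {subject} playing an instrument")
# Each variant would then be fed to Stable Diffusion under a fixed seed,
# and the depicted objects and layouts compared across the three image sets.
```

Under such a setup, the paper's finding would correspond to the "neutral" image set resembling the "masculine" set more closely than the "feminine" one.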
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words), a benchmark for this setting.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- MoESD: Mixture of Experts Stable Diffusion to Mitigate Gender Bias [23.10522891268232]
We show that this bias is already present in the text encoder of the model.
We propose MoESD (Mixture of Experts Stable Diffusion) with BiAs (Bias Adapters) to mitigate gender bias.
arXiv Detail & Related papers (2024-06-25T14:59:31Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Gender Bias in Transformer Models: A comprehensive survey [1.1011268090482573]
Gender bias in artificial intelligence (AI) has emerged as a pressing concern with profound implications for individuals' lives.
This paper presents a comprehensive survey that explores gender bias in Transformer models from a linguistic perspective.
arXiv Detail & Related papers (2023-06-18T11:40:47Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Gender Stereotyping Impact in Facial Expression Recognition [1.5340540198612824]
In recent years, machine learning-based models have become the most popular approach to Facial Expression Recognition (FER).
In publicly available FER datasets, apparent gender representation is usually roughly balanced overall, but the representation within individual labels is not.
We generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels.
We observe a discrepancy of up to 29% in the recognition of certain emotions between genders under the worst bias conditions.
arXiv Detail & Related papers (2022-10-11T10:52:23Z)
- Gender Artifacts in Visual Datasets [34.74191865400569]
We investigate what "gender artifacts" exist within large-scale visual datasets.
We find that gender artifacts are ubiquitous in the COCO and OpenImages datasets.
We claim that attempts to remove gender artifacts from such datasets are largely infeasible.
arXiv Detail & Related papers (2022-06-18T12:09:19Z)
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.