Analysing Gender Bias in Text-to-Image Models using Object Detection
- URL: http://arxiv.org/abs/2307.08025v1
- Date: Sun, 16 Jul 2023 12:31:29 GMT
- Title: Analysing Gender Bias in Text-to-Image Models using Object Detection
- Authors: Harvey Mannering
- Abstract summary: Using paired prompts that specify gender and vaguely reference an object, we can examine whether certain objects are associated with a particular gender.
Male prompts generated objects such as ties, knives, trucks, baseball bats, and bicycles more frequently.
Female prompts were more likely to generate objects such as handbags, umbrellas, bowls, bottles, and cups.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This work presents a novel strategy to measure bias in text-to-image models.
Using paired prompts that specify gender and vaguely reference an object (e.g.
"a man/woman holding an item") we can examine whether certain objects are
associated with a certain gender. In analysing results from Stable Diffusion,
we observed that male prompts generated objects such as ties, knives, trucks,
baseball bats, and bicycles more frequently. On the other hand, female prompts
were more likely to generate objects such as handbags, umbrellas, bowls,
bottles, and cups. We hope that the method outlined here will be a useful tool
for examining bias in text-to-image models.
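The paper's pipeline (generate images from paired gendered prompts, run an object detector over them, then compare per-gender detection frequencies) can be sketched in code. The counting step below is a minimal illustration, not the authors' implementation: the function name, the use of per-image presence counts, and the mock detection labels are all assumptions. In practice the label lists would come from a COCO-trained object detector applied to Stable Diffusion outputs.

```python
from collections import Counter

def gender_object_bias(detections_male, detections_female):
    """Compare how often each object class is detected in images
    generated from male vs. female prompts.

    Each argument is a list of per-image detection label lists.
    Returns {label: male_freq - female_freq}, where freq is the
    fraction of images in which the label appeared at least once.
    Positive values mean the object is detected more often under
    male prompts; negative, under female prompts.
    """
    def per_image_freq(detections):
        counts = Counter()
        for labels in detections:
            for label in set(labels):  # count each label once per image
                counts[label] += 1
        n = max(len(detections), 1)
        return {label: c / n for label, c in counts.items()}

    male_freq = per_image_freq(detections_male)
    female_freq = per_image_freq(detections_female)
    all_labels = set(male_freq) | set(female_freq)
    return {label: male_freq.get(label, 0.0) - female_freq.get(label, 0.0)
            for label in all_labels}

# Mock detector outputs for illustration only (not real results):
male_images = [["tie", "person"], ["tie", "truck"], ["person"]]
female_images = [["handbag", "person"], ["umbrella"], ["handbag", "cup"]]
bias = gender_object_bias(male_images, female_images)
```

With these mock inputs, "tie" receives a positive score (detected in 2 of 3 male-prompt images, 0 of 3 female-prompt images) and "handbag" a symmetric negative one, matching the direction of the associations the paper reports.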
Related papers
- Computational Analysis of Gender Depiction in the Comedias of Calderón de la Barca [6.978406757882009]
We develop methods to study gender depiction in the non-religious works (comedias) of Pedro Calderón de la Barca.
We gather insights from a corpus of more than 100 plays by using a gender classifier and applying model explainability (attribution) methods.
We find that female and male characters are portrayed differently and can be identified by the gender prediction model at practically useful accuracies.
arXiv Detail & Related papers (2024-11-06T13:13:33Z)
- Reflecting the Male Gaze: Quantifying Female Objectification in 19th and 20th Century Novels [3.0623865942628594]
We propose a framework for analyzing gender bias in terms of female objectification.
Our framework measures female objectification along two axes.
Applying our framework to 19th and 20th century novels reveals evidence of female objectification.
arXiv Detail & Related papers (2024-03-25T20:16:14Z)
- Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You [64.74707085021858]
We show that multilingual models suffer from significant gender biases just as monolingual models do.
We propose a novel benchmark, MAGBIG, intended to foster research on gender bias in multilingual models.
Our results show that not only do models exhibit strong gender biases but they also behave differently across languages.
arXiv Detail & Related papers (2024-01-29T12:02:28Z)
- Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender [1.4322753787990035]
We investigate the impact of objects on gender bias in image captioning systems.
We propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system.
arXiv Detail & Related papers (2023-10-29T19:39:03Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Gender Artifacts in Visual Datasets [34.74191865400569]
We investigate what "gender artifacts" exist within large-scale visual datasets.
We find that gender artifacts are ubiquitous in the COCO and OpenImages datasets.
We claim that attempts to remove gender artifacts from such datasets are largely infeasible.
arXiv Detail & Related papers (2022-06-18T12:09:19Z)
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models [73.12069620086311]
We investigate the visual reasoning capabilities and social biases of text-to-image models.
First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding.
Second, we assess the gender and skin tone biases by measuring the gender/skin tone distribution of generated images.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.