Women Wearing Lipstick: Measuring the Bias Between an Object and Its
Related Gender
- URL: http://arxiv.org/abs/2310.19130v2
- Date: Mon, 20 Nov 2023 12:15:19 GMT
- Title: Women Wearing Lipstick: Measuring the Bias Between an Object and Its
Related Gender
- Authors: Ahmed Sabir, Lluís Padró
- Abstract summary: We investigate the impact of objects on gender bias in image captioning systems.
We propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system.
- Score: 1.4322753787990035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we investigate the impact of objects on gender bias in image
captioning systems. Our results show that only gender-specific objects have a
strong gender bias (e.g., women-lipstick). In addition, we propose a visual
semantic-based gender score that measures the degree of bias and can be used as
a plug-in for any image captioning system. Our experiments demonstrate the
utility of the gender score, since we observe that our score can measure the
bias relation between a caption and its related gender; therefore, our score
can be used as an additional metric to the existing Object Gender Co-Occ
approach. Code and data are publicly available at
https://github.com/ahmedssabir/GenderScore.
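
The Object Gender Co-Occ approach mentioned in the abstract measures bias by counting how often an object word appears in a caption together with female versus male terms; the proposed gender score is intended to complement this count-based metric. The following is a minimal, hypothetical sketch of such a co-occurrence count; the word lists, function name, and ratio definition are illustrative assumptions and are not taken from the GenderScore repository.

from collections import defaultdict

# Illustrative gendered word lists; the lists used in the paper may differ.
FEMALE_WORDS = {"woman", "women", "girl", "girls", "she", "her"}
MALE_WORDS = {"man", "men", "boy", "boys", "he", "his"}

def object_gender_cooc(captions, objects):
    """Count, per object, co-occurrences with female vs. male words in captions.

    Returns object -> (female_count, male_count, ratio), where
    ratio = female / (female + male) when at least one gendered mention exists.
    """
    counts = defaultdict(lambda: [0, 0])
    for cap in captions:
        tokens = set(cap.lower().split())
        has_female = bool(tokens & FEMALE_WORDS)
        has_male = bool(tokens & MALE_WORDS)
        for obj in objects:
            if obj in tokens:
                counts[obj][0] += has_female
                counts[obj][1] += has_male
    return {
        obj: (f, m, f / (f + m) if (f + m) else None)
        for obj, (f, m) in counts.items()
    }

# A strongly gender-specific object such as "lipstick" should yield a ratio
# near 1.0, while gender-neutral objects should stay close to 0.5.
captions = ["a woman wearing lipstick", "a man wearing a tie", "a woman holding a cup"]
print(object_gender_cooc(captions, ["lipstick", "tie", "cup"]))
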
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- DiFair: A Benchmark for Disentangled Assessment of Gender Knowledge and Bias [13.928591341824248]
Debiasing techniques have been proposed to mitigate the gender bias that is prevalent in pretrained language models.
These are often evaluated on datasets that check the extent to which the model is gender-neutral in its predictions.
This evaluation protocol overlooks the possible adverse impact of bias mitigation on useful gender knowledge.
arXiv Detail & Related papers (2023-10-22T15:27:16Z)
- Analysing Gender Bias in Text-to-Image Models using Object Detection [0.0]
Using paired prompts that specify gender and vaguely reference an object, we can examine whether certain objects are associated with a certain gender.
Male prompts generated objects such as ties, knives, trucks, baseball bats, and bicycles more frequently.
Female prompts were more likely to generate objects such as handbags, umbrellas, bowls, bottles, and cups.
arXiv Detail & Related papers (2023-07-16T12:31:29Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access [3.3903891679981593]
Information access research (and development) sometimes makes use of gender.
This work makes a variety of assumptions about gender that are not aligned with current understandings of what gender is.
Most papers we review rely on a binary notion of gender, even if they acknowledge that gender cannot be split into two categories.
arXiv Detail & Related papers (2023-01-12T01:21:02Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.