Model-Agnostic Gender Debiased Image Captioning
- URL: http://arxiv.org/abs/2304.03693v3
- Date: Thu, 21 Dec 2023 06:44:25 GMT
- Title: Model-Agnostic Gender Debiased Image Captioning
- Authors: Yusuke Hirota, Yuta Nakashima, Noa Garcia
- Abstract summary: Image captioning models are known to perpetuate and amplify harmful societal bias in the training set.
We propose a framework, called LIBRA, that learns from synthetically biased samples to decrease both types of biases.
- Score: 29.640940966944697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image captioning models are known to perpetuate and amplify harmful societal
bias in the training set. In this work, we aim to mitigate such gender bias in
image captioning models. While prior work has addressed this problem by forcing
models to focus on people to reduce gender misclassification, it conversely
generates gender-stereotypical words at the expense of predicting the correct
gender. From this observation, we hypothesize that there are two types of
gender bias affecting image captioning models: 1) bias that exploits context to
predict gender, and 2) bias in the probability of generating certain (often
stereotypical) words because of gender. To mitigate both types of gender
biases, we propose a framework, called LIBRA, that learns from synthetically
biased samples to decrease both types of biases, correcting gender
misclassification and changing gender-stereotypical words to more neutral ones.
Code is available at https://github.com/rebnej/LIBRA.
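The abstract's central idea, learning from synthetically biased samples so a second model can undo both kinds of bias, can be sketched with a minimal, hypothetical example. The word lists and rule-based perturbations below are invented for illustration; LIBRA itself synthesizes biased captions with learned models, so the repository should be consulted for the actual pipeline.

```python
import random

# Hypothetical word lists for illustration only; LIBRA generates biased captions
# with learned models rather than rule-based swaps (see the repository).
GENDER_SWAP = {
    "man": "woman", "woman": "man",
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "boy": "girl", "girl": "boy",
}
# Assumed stereotype pairs, purely for demonstration.
STEREOTYPE_WORDS = {"woman": "kitchen", "man": "skateboard"}


def synthesize_biased_caption(caption: str, p_swap: float = 0.5) -> str:
    """Perturb a ground-truth caption to simulate the two bias types:
    1) context-driven gender misclassification (flip the gender words), or
    2) gender-stereotypical word generation (append a stereotype word).
    """
    tokens = caption.lower().split()
    if random.random() < p_swap:
        tokens = [GENDER_SWAP.get(t, t) for t in tokens]  # bias type 1
    else:
        for gender, stereo in STEREOTYPE_WORDS.items():   # bias type 2
            if gender in tokens and stereo not in tokens:
                tokens.append(stereo)
    return " ".join(tokens)


# A debiasing caption generator would then be trained to map
# (image, biased caption) -> original caption, so that at inference time it can
# rewrite the output of any off-the-shelf captioner (hence "model-agnostic").
original = "a woman riding a skateboard down the street"
print(synthesize_biased_caption(original))
```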
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z) - Are Models Biased on Text without Gender-related Language? [14.931375031931386]
We introduce UnStereoEval (USE), a novel framework for investigating gender bias in stereotype-free scenarios.
USE defines a sentence-level score based on pretraining data statistics to determine whether a sentence contains minimal word-gender associations.
We find low fairness across all 28 tested models, suggesting that bias does not solely stem from the presence of gender-related words.
arXiv Detail & Related papers (2024-05-01T15:51:15Z) - The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender bias: the well-known bias in gendered occupations and a novel aspect, bias in organizational power.
arXiv Detail & Related papers (2024-02-16T21:32:27Z) - Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z) - VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z) - Don't Forget About Pronouns: Removing Gender Bias in Language Models Without Losing Factual Gender Information [4.391102490444539]
We focus on two types of such signals in English texts: factual gender information and gender bias.
We aim to diminish the stereotypical bias in the representations while preserving the factual gender signal.
arXiv Detail & Related papers (2022-06-21T21:38:25Z) - Identifying and Mitigating Gender Bias in Hyperbolic Word Embeddings [34.378806636170616]
We extend the study of gender bias to the recently popularized hyperbolic word embeddings.
We propose gyrocosine bias, a novel measure for quantifying gender bias in hyperbolic word representations.
Experiments on a suite of evaluation tests show that Poincaré Gender Debias (PGD) effectively reduces bias while adding a minimal semantic offset.
arXiv Detail & Related papers (2021-09-28T14:43:37Z) - Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models [5.378664454650768]
This paper proposes two intuitive metrics, skew and stereotype, that quantify and analyse the gender bias present in contextual language models.
We find evidence that gender stereotype is approximately negatively correlated with gender skew in out-of-the-box models, suggesting a trade-off between these two forms of bias.
arXiv Detail & Related papers (2021-01-24T10:57:59Z) - Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z) - Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)