Gender Classification and Bias Mitigation in Facial Images
- URL: http://arxiv.org/abs/2007.06141v1
- Date: Mon, 13 Jul 2020 01:09:06 GMT
- Title: Gender Classification and Bias Mitigation in Facial Images
- Authors: Wenying Wu, Pavlos Protopapas, Zheng Yang, Panagiotis Michalatos
- Abstract summary: Recent research showed that algorithms trained on biased benchmark databases could result in algorithmic bias.
We conducted surveys on existing benchmark databases for facial recognition and gender classification tasks.
We worked to increase classification accuracy and mitigate algorithmic biases on our baseline model trained on the augmented benchmark database.
- Score: 7.438105108643341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gender classification algorithms have important applications in many domains
today, such as demographic research, law enforcement, and human-computer
interaction. Recent research has shown that algorithms trained on biased
benchmark databases can result in algorithmic bias. However, to date, little
research has been carried out on gender classification algorithms' bias
towards gender minority subgroups, such as the LGBTQ and non-binary
populations, who have distinct characteristics in gender expression. In this
paper, we began by surveying existing benchmark databases for facial
recognition and gender classification tasks and discovered that current
benchmark databases lack representation of gender minority subgroups. We
extended the current binary gender classifier to include a non-binary gender
class by assembling two new facial image databases: 1) a racially balanced
inclusive database with a subset of the LGBTQ population, and 2) an
inclusive-gender database consisting of people with non-binary gender. We then
worked to increase classification accuracy and mitigate algorithmic biases on
our baseline model trained on the augmented benchmark database. Our ensemble
model achieved an overall accuracy of 90.39%, a 38.72% increase over the
baseline binary gender classifier trained on Adience. While this is an initial
attempt at mitigating bias in gender classification, more work is needed to
model gender as a continuum by assembling more inclusive databases.
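The abstract sketches the overall recipe: replace the two-way gender head with a three-way one (male, female, non-binary) and track accuracy per subgroup to see whether the augmented data narrows the gap. Below is a minimal, hedged sketch of that recipe; it is not the authors' code, and the small CNN, the 64x64 input size, and the subgroup tags are illustrative assumptions (the paper itself reports an ensemble trained on its augmented databases).

```python
# Illustrative sketch only: a 3-class gender classifier plus per-subgroup
# accuracy reporting. Architecture, input size, and group tags are assumptions.
import torch
import torch.nn as nn

CLASSES = ["male", "female", "non-binary"]  # assumed 3-way label set

class GenderClassifier(nn.Module):
    """Small CNN with a 3-way classification head (not the paper's ensemble)."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def per_group_accuracy(logits, labels, groups):
    """Accuracy per subgroup; large gaps between groups indicate bias."""
    preds = logits.argmax(dim=1)
    results = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        results[g] = (preds[idx] == labels[idx]).float().mean().item()
    return results

if __name__ == "__main__":
    model = GenderClassifier()
    x = torch.randn(8, 3, 64, 64)                 # dummy batch of face crops
    y = torch.randint(0, len(CLASSES), (8,))      # dummy ground-truth labels
    groups = ["cisgender"] * 4 + ["lgbtq"] * 4    # illustrative subgroup tags
    print(per_group_accuracy(model(x), y, groups))
```

In the paper's setting, the subgroup tags would come from the inclusive databases described above, and the reported 38.72% accuracy gain is measured against the Adience-trained binary baseline.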
Related papers
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z) - Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the AmbGIMT benchmark (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z) - GenderBias-\emph{VL}: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-emphVL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z) - VisoGender: A dataset for benchmarking gender bias in image-text pronoun
resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z) - Deep Generative Views to Mitigate Gender Classification Bias Across
Gender-Race Groups [0.8594140167290097]
We propose a bias mitigation strategy to improve classification accuracy and reduce bias across gender-racial groups.
We leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias.
arXiv Detail & Related papers (2022-08-17T16:23:35Z) - Gendered Language in Resumes and its Implications for Algorithmic Bias
in Hiring [0.0]
We train a series of models to classify the gender of the applicant.
We investigate whether it is possible to obfuscate gender from resumes.
We find that there is a significant amount of gendered information in resumes even after obfuscation.
arXiv Detail & Related papers (2021-12-16T14:26:36Z) - Social Norm Bias: Residual Harms of Fairness-Aware Algorithms [21.50551404445654]
Social Norm Bias (SNoB) is a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems.
We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to gender norms.
We show that post-processing interventions do not mitigate this type of bias at all.
arXiv Detail & Related papers (2021-08-25T05:54:56Z) - Understanding Fairness of Gender Classification Algorithms Across
Gender-Race Groups [0.8594140167290097]
The aim of this paper is to investigate the differential performance of the gender classification algorithms across gender-race groups.
For all the algorithms used, Black females (and the Black race in general) always obtained the lowest accuracy rates.
Middle Eastern males and Latino females obtained higher accuracy rates most of the time.
arXiv Detail & Related papers (2020-09-24T04:56:10Z) - Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by
Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a Search Engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z) - Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.