Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
- URL: http://arxiv.org/abs/2208.08382v1
- Date: Wed, 17 Aug 2022 16:23:35 GMT
- Title: Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
- Authors: Sreeraj Ramachandran and Ajita Rattani
- Abstract summary: We propose a bias mitigation strategy to improve classification accuracy and reduce bias across gender-racial groups.
We leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias.
- Score: 0.8594140167290097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Published studies have suggested that automated face-based gender classification algorithms are biased across gender-race groups. Specifically, unequal accuracy rates were obtained for women and dark-skinned people. To mitigate the bias of gender classifiers, the vision community has developed several strategies. However, the efficacy of these mitigation strategies has been demonstrated for only a limited number of races, mostly Caucasian and African-American. Further, these strategies often trade classification accuracy for reduced bias. To further advance the state-of-the-art, we leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias. Through extensive experimental validation, we demonstrate that our bias mitigation strategy improves classification accuracy and reduces bias across gender-racial groups, resulting in state-of-the-art performance in intra- and cross-dataset evaluations.
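The abstract names its three ingredients (generative views, structured learning, evidential learning) without implementation detail. As a rough illustration only, and not the authors' code, the sketch below shows what the evidential-learning ingredient can look like: a classification head that outputs Dirichlet evidence and is trained with the type-II maximum-likelihood loss of Sensoy et al. (2018). "Generative views" would enter as extra training images, e.g. GAN-synthesized variants of each face, which is omitted here; all module names, shapes, and hyperparameters are assumptions.

```python
# Minimal sketch (not the paper's code): an evidential classification head
# in PyTorch. The network outputs non-negative "evidence" per class, which
# parameterizes a Dirichlet distribution alpha = evidence + 1.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 2  # binary gender classification, as in the paper

class EvidentialHead(nn.Module):
    """Maps backbone features to Dirichlet parameters alpha = evidence + 1."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, NUM_CLASSES)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        evidence = F.softplus(self.fc(feats))  # softplus keeps evidence >= 0
        return evidence + 1.0                  # alpha

def evidential_loss(alpha: torch.Tensor, y_onehot: torch.Tensor) -> torch.Tensor:
    """Expected cross-entropy under Dirichlet(alpha) (type-II MLE form):
    sum_k y_k * (digamma(S) - digamma(alpha_k)), with S = sum_k alpha_k."""
    strength = alpha.sum(dim=1, keepdim=True)
    loss = (y_onehot * (torch.digamma(strength) - torch.digamma(alpha))).sum(dim=1)
    return loss.mean()

# Prediction and per-sample uncertainty come from the same alpha:
#   p_hat = alpha / S        (predictive class probabilities)
#   u     = NUM_CLASSES / S  (grows when total evidence is low)
feats = torch.randn(4, 512)  # stand-in backbone features
head = EvidentialHead()
alpha = head(feats)
y = F.one_hot(torch.tensor([0, 1, 1, 0]), NUM_CLASSES).float()
print(evidential_loss(alpha, y))
```

The appeal of this formulation for bias mitigation is that the uncertainty u is produced per sample at no extra cost, so low-evidence predictions (often those on under-represented groups) can be identified rather than silently misclassified.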
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z)
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models (LVLMs).
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- Towards Fair Face Verification: An In-depth Analysis of Demographic Biases [11.191375513738361]
Deep learning-based person identification and verification systems have improved remarkably in accuracy in recent years.
However, such systems have been found to exhibit significant biases related to race, age, and gender.
This paper presents an in-depth analysis, with a particular emphasis on the intersectionality of these demographic factors.
arXiv Detail & Related papers (2023-07-19T14:49:14Z)
- Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns [53.62845317039185]
Bias-measuring datasets play a critical role in detecting biased behavior of language models.
We propose a novel method to collect diverse, natural, and minimally distant text pairs via counterfactual generation.
We show that four pre-trained language models are significantly more inconsistent across different gender groups than within each group.
arXiv Detail & Related papers (2023-02-11T12:11:03Z)
- Social Norm Bias: Residual Harms of Fairness-Aware Algorithms [21.50551404445654]
Social Norm Bias (SNoB) is a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems.
We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to gender norms.
We show that post-processing interventions do not mitigate this type of bias at all.
arXiv Detail & Related papers (2021-08-25T05:54:56Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and imposter sample pairs leads to suboptimal results (a per-group threshold sketch follows this list).
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups [0.8594140167290097]
The aim of this paper is to investigate the differential performance of gender classification algorithms across gender-race groups.
For all the algorithms used, Black females (and the Black race in general) always obtained the lowest accuracy rates.
Middle Eastern males and Latino females obtained higher accuracy rates most of the time.
arXiv Detail & Related papers (2020-09-24T04:56:10Z)
- Gender Classification and Bias Mitigation in Facial Images [7.438105108643341]
Recent research showed that algorithms trained on biased benchmark databases could result in algorithmic bias.
We conducted surveys on existing benchmark databases for facial recognition and gender classification tasks.
We worked to increase classification accuracy and mitigate algorithmic biases on our baseline model trained on the augmented benchmark database.
arXiv Detail & Related papers (2020-07-13T01:09:06Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
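The "Balancing Biases and Preserving Privacy on Balanced Faces in the Wild" entry above observes that a single global score threshold is suboptimal across demographic subgroups. As a minimal, hypothetical sketch (not that paper's domain-adaptation scheme), the snippet below calibrates one verification threshold per subgroup to a common false-match rate; all data and names are invented.

```python
# Minimal sketch: per-subgroup verification thresholds calibrated to one
# target false-match rate (FMR), instead of a single global threshold.
import numpy as np

def threshold_at_fmr(impostor_scores: np.ndarray, target_fmr: float) -> float:
    """Threshold whose FMR on these impostor scores is ~target_fmr.
    An impostor pair is falsely matched when its score >= threshold, so the
    (1 - target_fmr) quantile of the impostor scores is the threshold."""
    return float(np.quantile(impostor_scores, 1.0 - target_fmr))

def per_group_thresholds(scores, labels, groups, target_fmr=1e-3):
    """scores: similarity per pair; labels: 1 = genuine, 0 = impostor;
    groups: demographic subgroup id per pair. Returns {group: threshold}."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    return {
        g: threshold_at_fmr(scores[(groups == g) & (labels == 0)], target_fmr)
        for g in np.unique(groups)
    }

# Hypothetical usage: impostor score distributions differ by subgroup, so
# the calibrated thresholds differ too.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.30, 0.1, 1000), rng.normal(0.45, 0.1, 1000)])
labels = np.zeros(2000, dtype=int)          # impostor pairs only, for brevity
groups = np.repeat(["A", "B"], 1000)
print(per_group_thresholds(scores, labels, groups, target_fmr=0.01))
```

Under one global threshold, the group with the higher-scoring impostor distribution would suffer a higher false-match rate; per-group calibration equalizes that operating point.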