Assessing Gender Bias in Predictive Algorithms using eXplainable AI
- URL: http://arxiv.org/abs/2203.10264v1
- Date: Sat, 19 Mar 2022 07:47:45 GMT
- Title: Assessing Gender Bias in Predictive Algorithms using eXplainable AI
- Authors: Cristina Manresa-Yee and Silvia Ramis
- Abstract summary: Predictive algorithms have a powerful potential to offer benefits in areas as varied as medicine or education.
They can inherit the bias and prejudices present in humans.
The outcomes can systematically repeat errors that create unfair results.
- Score: 1.9798034349981162
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive algorithms have a powerful potential to offer benefits in areas as
varied as medicine or education. However, these algorithms and the data they use are built
by humans and can therefore inherit the bias and prejudices present in humans. Their
outcomes can systematically repeat errors that create unfair results, which can even lead
to discrimination (e.g. by gender, social group, or race). To illustrate how important it
is to have a diverse training dataset to avoid bias, we manipulate a well-known facial
expression recognition dataset to explore gender bias and discuss its implications.
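To make the experiment described in the abstract concrete, below is a minimal, hypothetical Python sketch (not the authors' code): it trains a classifier on increasingly gender-skewed subsets of a facial expression dataset and reports test accuracy separately per gender, which is where the effect of an unbalanced training set would show up. The arrays `X_train`, `y_train`, `gender_train` (and their test counterparts) are assumed placeholders for features, expression labels, and a per-sample gender attribute from any such dataset; the paper's XAI analysis would be layered on top of a model trained this way.

```python
# Hypothetical sketch (not the authors' code): train on gender-skewed subsets of a
# facial expression dataset and compare test accuracy per gender.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def skewed_subset(X, y, gender, male_fraction, rng, size=2000):
    """Subsample a training set in which `male_fraction` of the faces are male."""
    male = np.flatnonzero(gender == "male")
    female = np.flatnonzero(gender == "female")
    n_male = int(size * male_fraction)
    keep = np.concatenate([rng.choice(male, n_male, replace=False),
                           rng.choice(female, size - n_male, replace=False)])
    return X[keep], y[keep]

def per_gender_accuracy(model, X, y, gender):
    """Accuracy computed separately for each gender subgroup."""
    return {g: accuracy_score(y[gender == g], model.predict(X[gender == g]))
            for g in ("male", "female")}

# Usage, assuming X_*, y_*, gender_* are numpy arrays from any dataset loader:
# rng = np.random.default_rng(0)
# for frac in (0.5, 0.75, 1.0):  # balanced -> male-only training data
#     Xs, ys = skewed_subset(X_train, y_train, gender_train, frac, rng)
#     clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
#     print(frac, per_gender_accuracy(clf, X_test, y_test, gender_test))
```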
Related papers
- Mitigating Algorithmic Bias on Facial Expression Recognition [0.0]
Biased datasets are ubiquitous and present a challenge for machine learning.
The problem of biased datasets is especially sensitive when dealing with minority groups.
This work explores one way to mitigate bias, using a debiasing variational autoencoder, with experiments on facial expression recognition (the resampling idea is sketched after this entry).
arXiv Detail & Related papers (2023-12-23T17:41:30Z)
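As context for the entry above, a debiasing variational autoencoder typically resamples training faces according to how rare their latent representation is. The following hypothetical sketch shows only that resampling step, assuming `latents` comes from some pretrained encoder; it is not the cited paper's implementation.

```python
# Hypothetical sketch of the resampling step behind a debiasing VAE: weight each
# training face inversely to the estimated density of its latent code, so
# under-represented appearances are drawn more often. `latents` is assumed to come
# from any pretrained encoder; this is not the cited paper's code.
import numpy as np

def debiasing_sample_weights(latents, bins=10, smoothing=1e-3):
    """Per-example sampling probabilities proportional to 1 / estimated latent density."""
    density = np.ones(len(latents))
    for d in range(latents.shape[1]):                  # treat latent dims independently
        hist, edges = np.histogram(latents[:, d], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, d], edges[:-1]) - 1, 0, bins - 1)
        density *= hist[idx] + smoothing
    weights = 1.0 / density
    return weights / weights.sum()

# Usage: draw training batches with these probabilities, e.g.
# idx = np.random.default_rng(0).choice(len(latents), size=64,
#                                        p=debiasing_sample_weights(latents))
```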
- The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated [70.23064111640132]
We compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets.
Experiments show that the effects of debiasing are consistently underestimated across all tasks.
arXiv Detail & Related papers (2023-09-16T20:25:34Z)
- Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness [15.210232622716129]
Data-driven predictive solutions, predominant in commercial applications, tend to suffer from biases and stereotypes.
Data augmentation reduces gender bias by adding counterfactual examples to the training dataset (sketched after this entry).
We show that some of the examples in the augmented dataset can be unimportant or even harmful for fairness.
arXiv Detail & Related papers (2022-11-20T22:42:30Z)
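Counterfactual data augmentation, as mentioned in the entry above, commonly means adding copies of training sentences with gendered terms swapped. A minimal, hypothetical sketch follows; the swap list is illustrative and not taken from the cited paper.

```python
# Hypothetical sketch of counterfactual data augmentation for gender bias in text:
# duplicate every training sentence with gendered words swapped. The (naive) swap
# list is illustrative, not taken from the cited paper.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "man": "woman", "woman": "man", "male": "female", "female": "male"}

def counterfactual(sentence: str) -> str:
    """Swap gendered words at the word level (naive: ignores case and ambiguity)."""
    return " ".join(SWAPS.get(w.lower(), w) for w in sentence.split())

def augment(dataset):
    """Add a gender-swapped counterfactual copy of every (sentence, label) pair."""
    return dataset + [(counterfactual(s), label) for s, label in dataset]

# Usage:
# augment([("he is a doctor", 1)])  # -> [("he is a doctor", 1), ("she is a doctor", 1)]
```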
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- fairadapt: Causal Reasoning for Fair Data Pre-processing [2.1915057426589746]
This manuscript describes the R-package fairadapt, which implements a causal inference pre-processing method.
We discuss appropriate relaxations which assume certain causal pathways from the sensitive attribute to the outcome are not discriminatory.
arXiv Detail & Related papers (2021-10-19T18:48:28Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results (a per-group threshold sketch follows this entry).
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
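The single-threshold issue noted in the entry above can be made concrete with a small hypothetical sketch: instead of one global decision threshold on verification scores, pick a threshold per demographic subgroup that meets the same target false match rate. This is only an illustration of the problem; the paper's actual proposal is a domain adaptation scheme over the features themselves, and the array names below are assumptions.

```python
# Hypothetical sketch: choose one verification threshold per subgroup so every group
# operates at the same false match rate (FMR), instead of one global threshold.
# `impostor_scores` are similarity scores of impostor pairs; `groups` holds the
# demographic subgroup label of each pair. Both are placeholder arrays.
import numpy as np

def threshold_at_fmr(impostor_scores, target_fmr=1e-3):
    """Threshold whose impostor acceptance rate is approximately `target_fmr`."""
    return np.quantile(impostor_scores, 1.0 - target_fmr)

def per_group_thresholds(impostor_scores, groups, target_fmr=1e-3):
    """One threshold per demographic subgroup (e.g. gender x ethnicity)."""
    return {g: threshold_at_fmr(impostor_scores[groups == g], target_fmr)
            for g in np.unique(groups)}

# Usage:
# global_thr = threshold_at_fmr(impostor_scores)
# group_thr = per_group_thresholds(impostor_scores, groups)
# A pair with score s from group g is then accepted when s >= group_thr[g].
```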
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.