Improving Fairness of AI Systems with Lossless De-biasing
- URL: http://arxiv.org/abs/2105.04534v1
- Date: Mon, 10 May 2021 17:38:38 GMT
- Title: Improving Fairness of AI Systems with Lossless De-biasing
- Authors: Yan Zhou, Murat Kantarcioglu, Chris Clifton
- Abstract summary: Mitigating bias in AI systems to increase overall fairness has emerged as an important challenge.
We present an information-lossless de-biasing technique that targets the scarcity of data in the disadvantaged group.
- Score: 15.039284892391565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In today's society, AI systems are increasingly used to make critical
decisions such as credit scoring and patient triage. However, the great
convenience brought by AI systems comes with a troubling prevalence of bias
against underrepresented groups. Mitigating bias in AI systems to increase overall
fairness has emerged as an important challenge. Existing studies on mitigating
bias in AI systems focus on eliminating sensitive demographic information
embedded in data. Given the temporal and contextual complexity of
conceptualizing fairness, lossy treatment of demographic information may
contribute to an unnecessary trade-off between accuracy and fairness,
especially when demographic attributes and class labels are correlated. In this
paper, we present an information-lossless de-biasing technique that targets the
scarcity of data in the disadvantaged group. Unlike existing work, we
demonstrate, both theoretically and empirically, that oversampling
underrepresented groups can not only mitigate algorithmic bias in AI systems
that consistently predict a favorable outcome for a certain group, but also
improve overall accuracy by mitigating the class imbalance that leads to a bias
towards the majority class. We demonstrate the effectiveness of our technique
on real datasets using a variety of fairness metrics.
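A minimal sketch of the general idea described in the abstract (not the authors' exact algorithm): duplicate rows from the disadvantaged group until the two groups are balanced, keeping every row and all demographic attributes intact, then check a fairness metric such as the demographic parity difference. The helper names and the synthetic data below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def oversample_disadvantaged(X, y, group, disadvantaged=1, seed=0):
    """Randomly duplicate rows of the disadvantaged group until both
    groups are the same size. No rows are altered or dropped, so no
    demographic information is discarded."""
    rng = np.random.default_rng(seed)
    minority = np.where(group == disadvantaged)[0]
    majority = np.where(group != disadvantaged)[0]
    n_extra = len(majority) - len(minority)
    if n_extra <= 0:
        return X, y, group
    extra = rng.choice(minority, size=n_extra, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx], group[idx]

def demographic_parity_difference(y_pred, group):
    """|P(favorable | group 0) - P(favorable | group 1)|; 0 is perfectly fair."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Synthetic, imbalanced data: group 1 is underrepresented 9:1.
rng = np.random.default_rng(0)
n0, n1 = 900, 100
X = np.vstack([rng.normal(0.0, 1.0, (n0, 5)), rng.normal(0.5, 1.0, (n1, 5))])
g = np.array([0] * n0 + [1] * n1)
y = (X[:, 0] + rng.normal(0, 1, n0 + n1) > 0).astype(int)

Xb, yb, gb = oversample_disadvantaged(X, y, g)
clf = LogisticRegression().fit(Xb, yb)
print("demographic parity difference:",
      demographic_parity_difference(clf.predict(X), g))
```

Because the balanced data also reduces class imbalance, the same step can help overall accuracy, which is the trade-off-free behavior the abstract claims.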
Related papers
- Data quality dimensions for fair AI [0.0]
We consider the problem of bias in AI systems from the point of view of Information Quality dimensions.
We illustrate potential improvements to a bias mitigation tool for gender classification errors.
Identifying data quality dimensions to implement in a bias mitigation tool may help achieve greater fairness.
arXiv Detail & Related papers (2023-05-11T16:48:58Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a general consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset (a toy illustration follows this entry).
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
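As a toy illustration of the edge-editing idea (not D-BIAS's actual simulation method; the variables and coefficients below are hypothetical): in a linear structural causal model, weakening or zeroing the coefficient of a biased edge and resampling yields a "debiased" dataset.

```python
import numpy as np

# Hypothetical linear structural causal model with one biased edge:
#   skill -> hiring_score (legitimate), gender -> hiring_score (biased).
rng = np.random.default_rng(42)
n = 1000
gender = rng.integers(0, 2, n)        # sensitive attribute
skill = rng.normal(0.0, 1.0, n)       # legitimate cause

def simulate_scores(bias_coef):
    """Resample hiring scores with the gender edge set to bias_coef."""
    return skill + bias_coef * gender + rng.normal(0.0, 0.5, n)

original = simulate_scores(bias_coef=0.8)   # biased edge at full strength
debiased = simulate_scores(bias_coef=0.0)   # user deletes the edge

for name, score in (("original", original), ("debiased", debiased)):
    gap = score[gender == 1].mean() - score[gender == 0].mean()
    print(f"{name}: between-group score gap = {gap:+.2f}")
```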
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long-tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set (a minimal computation of this index is sketched after this entry).
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
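For reference, the Disparate Impact index mentioned in the last entry is straightforward to compute. The sketch below assumes binary predictions and a binary protected attribute, and is not tied to that paper's exact experimental setup.

```python
import numpy as np

def disparate_impact(y_pred, group, favorable=1, protected=1):
    """DI = P(favorable | protected) / P(favorable | unprotected).
    Values below the common four-fifths (0.8) threshold are often
    read as evidence of adverse impact."""
    p_protected = (y_pred[group == protected] == favorable).mean()
    p_unprotected = (y_pred[group != protected] == favorable).mean()
    return p_protected / p_unprotected

# Toy check: 30% favorable for the protected group vs 60% otherwise.
y_pred = np.array([1] * 3 + [0] * 7 + [1] * 6 + [0] * 4)
group = np.array([1] * 10 + [0] * 10)
print(disparate_impact(y_pred, group))  # 0.3 / 0.6 = 0.5
```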