FAE: A Fairness-Aware Ensemble Framework
- URL: http://arxiv.org/abs/2002.00695v1
- Date: Mon, 3 Feb 2020 13:05:18 GMT
- Title: FAE: A Fairness-Aware Ensemble Framework
- Authors: Vasileios Iosifidis, Besnik Fetahu, Eirini Ntoutsi
- Abstract summary: The FAE (Fairness-Aware Ensemble) framework combines fairness-related interventions at both the pre- and post-processing steps of the data analysis process.
In the preprocessing step, we tackle the problems of under-representation of the protected group and of class-imbalance.
In the post-processing step, we tackle the problem of class overlapping by shifting the decision boundary in the direction of fairness.
- Score: 18.993049769711114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated decision making based on big data and machine learning (ML)
algorithms can result in discriminatory decisions against certain protected
groups defined upon personal data like gender, race, sexual orientation, etc.
Such algorithms designed to discover patterns in big data might not only pick
up any encoded societal biases in the training data, but even worse, they might
reinforce such biases resulting in more severe discrimination. The majority of
thus far proposed fairness-aware machine learning approaches focus solely on
the pre-, in- or post-processing steps of the machine learning process, that
is, input data, learning algorithms or derived models, respectively. However,
the fairness problem cannot be isolated to a single step of the ML process.
Rather, discrimination is often a result of complex interactions between big
data and algorithms, and therefore, a more holistic approach is required. The
proposed FAE (Fairness-Aware Ensemble) framework combines fairness-related
interventions at both the pre- and post-processing steps of the data analysis
process. In the preprocessing step, we tackle the problems of
under-representation of the protected group (group imbalance) and of
class-imbalance by generating balanced training samples. In the post-processing
step, we tackle the problem of class overlapping by shifting the decision
boundary in the direction of fairness.
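As a rough illustration of the two interventions described in the abstract, the sketch below balances the training data across protected-group and class combinations and then shifts the decision threshold for the protected group. This is a minimal sketch under assumed simplifications (random oversampling, a single logistic-regression model, and a demographic-parity-style threshold shift); it does not reproduce FAE's actual ensemble construction or boundary-shifting procedure, and the function names and synthetic data are hypothetical.

```python
# Illustrative sketch of the two FAE-style interventions described in the abstract.
# Assumptions (not from the paper): random oversampling per (group, class) cell,
# a single logistic-regression model, and a demographic-parity-style threshold shift.
import numpy as np
from sklearn.linear_model import LogisticRegression


def balance_by_group_and_class(X, y, group, rng):
    """Oversample every (protected group, class) cell up to the largest cell size."""
    cells = [(g, c) for g in np.unique(group) for c in np.unique(y)]
    target = max(np.sum((group == g) & (y == c)) for g, c in cells)
    idx = []
    for g, c in cells:
        cell = np.where((group == g) & (y == c))[0]
        idx.append(rng.choice(cell, size=target, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx], group[idx]


def shift_threshold_toward_fairness(scores, group, protected, base=0.5, step=0.01):
    """Lower the protected group's decision threshold until its positive-prediction
    rate reaches the non-protected group's rate (a simple boundary shift)."""
    other_rate = np.mean(scores[group != protected] >= base)
    t = base
    while t > step and np.mean(scores[group == protected] >= t) < other_rate:
        t -= step
    return np.where(group == protected, scores >= t, scores >= base).astype(int)


# Usage with synthetic data (shapes only; substitute a real dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)  # 1 = protected group (assumed encoding)
y = (X[:, 0] - 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

Xb, yb, _ = balance_by_group_and_class(X, y, group, rng)
clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
scores = clf.predict_proba(X)[:, 1]
y_pred = shift_threshold_toward_fairness(scores, group, protected=1)
```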
Related papers
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness? [8.679212948810916]
Several fairness pre-processing algorithms are available to alleviate implicit biases during model training.
These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy.
We evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble.
arXiv Detail & Related papers (2022-12-05T21:54:29Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning [8.602734307457387]
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features.
arXiv Detail & Related papers (2022-07-13T09:48:52Z)
- Learning from Heterogeneous Data Based on Social Interactions over Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that the proposed strategy enables the agents to learn consistently under this highly heterogeneous setting.
arXiv Detail & Related papers (2021-12-17T12:47:18Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data (a generic sketch of this pseudo-labeling step appears after the related-papers list).
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
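The fair semi-supervised learning entry above mentions pseudo labeling in the pre-processing phase; the sketch below shows only that generic step (train on labeled data, keep high-confidence predictions on unlabeled data, retrain). It is a hedged illustration: the confidence cutoff, the random-forest base learner, and the synthetic data are assumptions, and the paper's fairness constraints and bias/variance/noise decomposition are not implemented here.

```python
# Generic pseudo-labeling step for semi-supervised pre-processing, as mentioned in
# the fair semi-supervised learning entry above. Assumptions (not from the paper):
# the confidence cutoff, the RandomForest base learner, and the synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def pseudo_label(X_labeled, y_labeled, X_unlabeled, confidence=0.9):
    """Train on labeled data, keep high-confidence predictions on unlabeled data
    as pseudo-labels, and retrain on the augmented set."""
    base = RandomForestClassifier(n_estimators=100, random_state=0)
    base.fit(X_labeled, y_labeled)
    proba = base.predict_proba(X_unlabeled)
    pred = base.predict(X_unlabeled)
    keep = proba.max(axis=1) >= confidence  # only confident pseudo-labels
    X_aug = np.vstack([X_labeled, X_unlabeled[keep]])
    y_aug = np.concatenate([y_labeled, pred[keep]])
    final = RandomForestClassifier(n_estimators=100, random_state=0)
    final.fit(X_aug, y_aug)
    return final, int(keep.sum())


# Usage with synthetic data (replace with a real labeled/unlabeled split).
rng = np.random.default_rng(0)
X_l = rng.normal(size=(200, 4))
y_l = (X_l[:, 0] > 0).astype(int)
X_u = rng.normal(size=(800, 4))
model, n_pseudo = pseudo_label(X_l, y_l, X_u)
```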
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.