Metrics and methods for a systematic comparison of fairness-aware
machine learning algorithms
- URL: http://arxiv.org/abs/2010.03986v1
- Date: Thu, 8 Oct 2020 13:58:09 GMT
- Title: Metrics and methods for a systematic comparison of fairness-aware
machine learning algorithms
- Authors: Gareth P. Jones, James M. Hickey, Pietro G. Di Stefano, Charanpal
Dhanjal, Laura C. Stoddart and Vlasios Vasileiou
- Abstract summary: This study is the most comprehensive of its kind.
It considers fairness, predictive-performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding and removing bias from the decisions made by machine learning
models is essential to avoid discrimination against unprivileged groups.
Despite recent progress in algorithmic fairness, there is still no clear answer
as to which bias-mitigation approaches are most effective. Evaluation
strategies are typically use-case specific, rely on data with unclear bias, and
employ a fixed policy to convert model outputs to decision outcomes. To address
these problems, we performed a systematic comparison of a number of popular
fairness algorithms applicable to supervised classification. Our study is the
most comprehensive of its kind. It utilizes three real and four synthetic
datasets, and two different ways of converting model outputs to decisions. It
considers fairness, predictive-performance, calibration quality, and speed of
28 different modelling pipelines, corresponding to both fairness-unaware and
fairness-aware algorithms. We found that fairness-unaware algorithms typically
fail to produce adequately fair models and that the simplest algorithms are not
necessarily the fairest ones. We also found that fairness-aware algorithms can
induce fairness without material drops in predictive power. Finally, we found
that dataset idiosyncrasies (e.g., degree of intrinsic unfairness, nature of
correlations) do affect the performance of fairness-aware approaches. Our
results allow the practitioner to narrow down the approach(es) they would like
to adopt without having to know in advance their fairness requirements.
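The evaluation protocol described above can be pictured with standard tooling. The sketch below is a minimal illustration, not the authors' benchmark code: it scores a single model on accuracy, a demographic-parity gap, and calibration (Brier score) under one fixed-threshold decision policy; the helper names, the 0.5 threshold, and the synthetic data are assumptions for illustration.

```python
# Minimal sketch of one evaluation step: fairness, predictive performance and
# calibration of a single pipeline under a fixed-threshold decision policy.
# This is an illustrative approximation, not the authors' benchmark code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between two protected groups."""
    return abs(decisions[groups == 0].mean() - decisions[groups == 1].mean())

def evaluate_pipeline(model, X, y, groups, threshold=0.5):
    """Score one fitted model: accuracy, fairness gap and calibration (Brier)."""
    scores = model.predict_proba(X)[:, 1]
    decisions = (scores >= threshold).astype(int)  # fixed decision policy
    return {
        "accuracy": accuracy_score(y, decisions),
        "dp_difference": demographic_parity_difference(decisions, groups),
        "brier": brier_score_loss(y, scores),
    }

# Toy usage on synthetic data with a binary protected attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
groups = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * groups + rng.normal(scale=0.5, size=1000) > 0).astype(int)
features = np.column_stack([X, groups])
model = LogisticRegression().fit(features, y)
print(evaluate_pipeline(model, features, y, groups))
```

A full comparison in the spirit of the paper would repeat this over many pipelines, datasets, and decision policies.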
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
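A quick way to surface the class-level gaps this paper reports is to break test accuracy down per class; the sketch below is a generic illustration with made-up label arrays, not the paper's evaluation protocol.

```python
# Illustrative per-class accuracy breakdown: classes with accuracy far below
# the mean are the "harder" classes that suffer the larger prediction biases.
import numpy as np

def per_class_accuracy(y_true, y_pred, num_classes):
    accs = np.zeros(num_classes)
    for c in range(num_classes):
        mask = y_true == c
        accs[c] = (y_pred[mask] == c).mean() if mask.any() else np.nan
    return accs

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 0, 2, 2, 1, 2])
print(per_class_accuracy(y_true, y_pred, num_classes=3))  # [1.0, 0.5, 0.75]
```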
arXiv Detail & Related papers (2024-02-28T07:54:50Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
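For context, the static group-and-label reweighing that adaptive schemes of this kind build on can be sketched in a few lines; the weights below are the classic Kamiran-Calders baseline, not the paper's adaptive priority weights.

```python
# Sketch of standard group/label reweighing: each (group, label) cell gets a
# weight so that the protected attribute and the label look independent.
# This is the static baseline; the paper's method adapts weights further.
import numpy as np

def reweighing_weights(groups, labels):
    n = len(labels)
    weights = np.empty(n)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            # expected count under independence / observed count
            expected = (groups == g).sum() * (labels == y).sum() / n
            weights[cell] = expected / max(cell.sum(), 1)
    return weights

groups = np.array([0, 0, 0, 1, 1, 1])
labels = np.array([1, 1, 0, 1, 0, 0])
print(reweighing_weights(groups, labels))  # up-weights under-represented cells
```

Such weights can be passed as `sample_weight` to most scikit-learn estimators.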
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning
Fairness? [8.679212948810916]
Several fairness pre-processing algorithms are available to alleviate implicit biases during model training.
These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy.
We evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble.
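Assuming each pre-processing algorithm is exposed as a function that transforms the training data, a crude version of such an ensemble can be sketched as below; the two transforms used here (identity and dropping the protected column) are simple stand-ins, not the three algorithms evaluated in the paper.

```python
# Hypothetical sketch of a pre-processing "ensemble": run several candidate
# debiasing transforms, train a model on each output, and keep the transform
# whose model shows the smallest fairness gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

def dp_gap(decisions, groups):
    return abs(decisions[groups == 0].mean() - decisions[groups == 1].mean())

def identity(X, groups):
    return X

def drop_protected(X, groups):
    return X[:, :-1]          # assumes the protected attribute is the last column

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, size=500)
X = np.column_stack([rng.normal(size=(500, 3)), groups])
y = (X[:, 0] + groups + rng.normal(scale=0.5, size=500) > 0.5).astype(int)

results = []
for name, transform in [("identity", identity), ("drop_protected", drop_protected)]:
    Xt = transform(X, groups)
    model = LogisticRegression().fit(Xt, y)
    results.append((dp_gap(model.predict(Xt), groups), name))
print(min(results))           # transform with the smallest demographic-parity gap
```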
arXiv Detail & Related papers (2022-12-05T21:54:29Z) - Understanding Unfairness in Fraud Detection through Model and Data Bias
Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
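One way to make such hypotheses concrete is to inject a controlled bias into synthetic data and watch how a fairness-blind model responds; the group-dependent label noise below is an assumed bias mechanism for illustration, not the paper's exact experimental setup.

```python
# Illustrative experiment: add group-dependent label noise (a controlled data
# bias) and measure how a fairness-blind classifier's errors split by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
groups = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 4))
y_clean = (X[:, 0] + X[:, 1] > 0).astype(int)

# Bias injection: flip 20% of positive labels, but only for group 1.
flip = (groups == 1) & (y_clean == 1) & (rng.random(n) < 0.2)
y_biased = np.where(flip, 0, y_clean)

model = LogisticRegression().fit(X, y_biased)       # fairness-blind training
pred = model.predict(X)
for g in (0, 1):
    mask = groups == g
    print(f"group {g}: positive rate {pred[mask].mean():.2f}, "
          f"error vs clean labels {(pred[mask] != y_clean[mask]).mean():.2f}")
```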
arXiv Detail & Related papers (2022-07-13T15:18:30Z) - Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
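As a rough illustration of normalisation-based debiasing for regression (an assumed simplification, not necessarily the exact FaiReg formulation), the regression target can be standardised within each protected group before training, so that group-level offsets in the label cannot be learned:

```python
# Simplified per-group target normalisation for a regression problem. This is
# an assumed simplification of normalisation-based fairness, not the paper's
# exact method.
import numpy as np
from sklearn.linear_model import LinearRegression

def normalise_target_per_group(y, groups):
    y_norm = np.empty_like(y, dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        y_norm[mask] = (y[mask] - y[mask].mean()) / (y[mask].std() + 1e-8)
    return y_norm

rng = np.random.default_rng(3)
groups = rng.integers(0, 2, size=800)
X = np.column_stack([rng.normal(size=(800, 2)), groups])   # protected attribute as a feature
y = X[:, 0] + 2.0 * groups + rng.normal(scale=0.3, size=800)

plain = LinearRegression().fit(X, y).predict(X)
fair = LinearRegression().fit(X, normalise_target_per_group(y, groups)).predict(X)
for name, pred in [("plain", plain), ("normalised", fair)]:
    gap = abs(pred[groups == 0].mean() - pred[groups == 1].mean())
    print(f"{name}: group gap in predictions = {gap:.3f}")
```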
arXiv Detail & Related papers (2022-02-02T12:26:25Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - FAIRLEARN:Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
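The general idea of folding a user-specified fairness constraint into the optimisation can be sketched with a penalised logistic loss; the penalty used below (squared gap in mean predicted scores between groups) and its weight are illustrative assumptions, not the FAIRLEARN procedure itself.

```python
# Sketch of constraint-in-the-objective training: logistic loss plus a penalty
# on the gap in mean predicted scores between protected groups. The penalty
# form and its weight `lam` are illustrative, not the FAIRLEARN procedure.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, groups, lam=1.0, lr=0.1, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)                    # logistic loss gradient
        gap = p[groups == 0].mean() - p[groups == 1].mean()   # fairness constraint
        d_p = p * (1 - p)                                     # d sigmoid / d z
        grad_gap = (X[groups == 0] * d_p[groups == 0, None]).mean(axis=0) \
                 - (X[groups == 1] * d_p[groups == 1, None]).mean(axis=0)
        w -= lr * (grad_loss + lam * 2 * gap * grad_gap)      # penalised update
    return w

rng = np.random.default_rng(4)
groups = rng.integers(0, 2, size=1000)
X = np.column_stack([rng.normal(size=(1000, 2)), groups.astype(float)])
y = (X[:, 0] + groups > 0.5).astype(int)
w = train_fair_logreg(X, y, groups)
p = sigmoid(X @ w)
print(abs(p[groups == 0].mean() - p[groups == 1].mean()))     # reduced score gap
```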
arXiv Detail & Related papers (2021-11-17T03:07:18Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
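The acquisition side of such a study can be pictured with a simple uncertainty-based selection step; the entropy criterion below is a common stand-in rather than BALD itself, and the data and labelling budget are assumptions.

```python
# Minimal uncertainty-based active-learning step: pick the pool points with
# the highest predictive entropy for labelling. Entropy is a simple stand-in
# for BALD; the pool, budget and model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def acquire_most_uncertain(model, X_pool, budget):
    p = model.predict_proba(X_pool)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-budget:]          # indices of the most uncertain points

rng = np.random.default_rng(5)
X_labeled = rng.normal(size=(50, 3))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(500, 3))

model = LogisticRegression().fit(X_labeled, y_labeled)
picked = acquire_most_uncertain(model, X_pool, budget=10)
print(picked)   # these points would be sent for labelling next
```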
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Transparency Tools for Fairness in AI (Luskin) [12.158766675246337]
We propose new tools for assessing and correcting fairness and bias in AI algorithms.
One of the three tools is a new definition of fairness called "controlled fairness" with respect to choices of protected features and filters.
The tools are shown to be useful for understanding various dimensions of bias and, in practice, effective in starkly reducing a given observed bias when tested on new data.
arXiv Detail & Related papers (2020-07-09T00:21:54Z) - Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias?
An Empirical Study on Model Fairness [7.673007415383724]
We have created a benchmark of 40 top-rated models from Kaggle used for 5 different tasks.
We have applied 7 mitigation techniques on these models and analyzed the fairness, mitigation results, and impacts on performance.
arXiv Detail & Related papers (2020-05-21T23:35:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.