ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods
- URL: http://arxiv.org/abs/2409.16965v2
- Date: Mon, 21 Oct 2024 20:35:36 GMT
- Title: ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods
- Authors: MaryBeth Defrance, Maarten Buyl, Tijl De Bie
- Abstract summary: We introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting.
We apply ABCFair to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset.
- Score: 12.774108753281809
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, these subtle differences make it highly complicated to benchmark fairness methods, as their performance can strongly depend on exactly how the bias mitigation problem was originally framed. Hence, we introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting, enabling proper comparability between methods for any use case. We apply ABCFair to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.
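As a rough illustration of why the choice of fairness notion matters when comparing methods, the sketch below (not the ABCFair implementation; all names and data are illustrative) computes two common notions, demographic parity and equalized odds, from binary predictions for a single binary sensitive feature. A method can look fair under one notion and unfair under the other, which is exactly the comparability problem the benchmark addresses.

```python
# Minimal sketch, not the ABCFair code: two common fairness notions computed
# from binary predictions for a single binary sensitive feature.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    r0 = y_pred[group == 0].mean()
    r1 = y_pred[group == 1].mean()
    return abs(r0 - r1)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap between groups in true-positive or false-positive rate."""
    gaps = []
    for label in (0, 1):  # label 0 -> FPR gap, label 1 -> TPR gap
        r0 = y_pred[(group == 0) & (y_true == label)].mean()
        r1 = y_pred[(group == 1) & (y_true == label)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Illustrative data standing in for the output of some bias-mitigation method.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))
```

Ranking methods by the first metric and by the second can produce different orderings, which is why a benchmark has to make the fairness notion an explicit, configurable choice.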
Related papers
- Different Horses for Different Courses: Comparing Bias Mitigation Algorithms in ML [9.579645248339004]
We show significant variance in fairness achieved by several algorithms and the influence of the learning pipeline on fairness scores.
We highlight that most bias mitigation techniques can achieve comparable performance.
We hope our work encourages future research on how various choices in the lifecycle of developing an algorithm impact fairness.
arXiv Detail & Related papers (2024-11-17T15:17:08Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with both strong performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Improving Fair Training under Correlation Shifts [33.385118640843416]
In particular, when the bias between labels and sensitive groups changes, the fairness of the trained model is directly influenced and can worsen.
We analytically show that existing in-processing fair algorithms have fundamental limits in accuracy and group fairness.
We propose a novel pre-processing step that samples the input data to reduce correlation shifts.
arXiv Detail & Related papers (2023-02-05T07:23:35Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
arXiv Detail & Related papers (2022-02-02T12:26:25Z)
- One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification [0.48733623015338234]
One-vs.-One Mitigation applies fairness-aware binary classification to each pair of subgroups defined by the sensitive attributes.
Our method mitigates intersectional bias much better than conventional methods in all settings.
arXiv Detail & Related papers (2020-10-26T11:35:39Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario (a generic post-processing sketch follows this list).
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
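As a generic illustration of the post-processing family covered by the benchmark and touched on by the last papers above, the sketch below picks a per-group decision threshold on model scores so that positive-prediction rates come out roughly equal across groups, i.e. a crude demographic-parity style post-hoc adjustment. All names and data are illustrative; this is not the method of any specific paper listed here.

```python
# Generic post-processing sketch (illustrative only): group-dependent score
# thresholds chosen so each group receives roughly the same positive rate.
import numpy as np

def group_thresholds(scores, group, target_rate):
    """Per-group score thresholds yielding roughly `target_rate` positives."""
    thresholds = {}
    for g in np.unique(group):
        s = scores[group == g]
        # The (1 - target_rate) quantile of this group's scores.
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def apply_thresholds(scores, group, thresholds):
    """Binary decisions using the threshold of each individual's group."""
    cut = np.array([thresholds[g] for g in group])
    return (scores >= cut).astype(int)

# Illustrative usage with random scores for two groups.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
scores = rng.random(1000) + 0.1 * group  # group 1 gets slightly higher scores
thr = group_thresholds(scores, group, target_rate=0.3)
y_pred = apply_thresholds(scores, group, thr)
for g in (0, 1):
    print(f"group {g}: positive rate = {y_pred[group == g].mean():.2f}")
```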