Parity-based Cumulative Fairness-aware Boosting
- URL: http://arxiv.org/abs/2201.01148v1
- Date: Tue, 4 Jan 2022 14:16:36 GMT
- Title: Parity-based Cumulative Fairness-aware Boosting
- Authors: Vasileios Iosifidis, Arjun Roy, Eirini Ntoutsi
- Abstract summary: Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race.
We propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution at each round.
Our experiments show that our approach can achieve parity in terms of statistical parity, equal opportunity, and disparate mistreatment.
- Score: 7.824964622317634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven AI systems can lead to discrimination on the basis of protected
attributes like gender or race. One reason for this behavior is the encoded
societal biases in the training data (e.g., females are underrepresented),
which is aggravated in the presence of unbalanced class distributions (e.g.,
"granted" is the minority class). State-of-the-art fairness-aware machine
learning approaches focus on preserving the *overall* classification
accuracy while improving fairness. In the presence of class-imbalance, such
methods may further aggravate the problem of discrimination by denying an
already underrepresented group (e.g., *females*) the fundamental rights
of equal social privileges (e.g., equal credit opportunity).
To this end, we propose AdaFair, a fairness-aware boosting ensemble that
changes the data distribution at each round, taking into account not only the
class errors but also the fairness-related performance of the model defined
cumulatively based on the partial ensemble. Beyond the in-training boosting
of the group discriminated against in each round, AdaFair directly tackles
class imbalance in a post-training phase by selecting the number of ensemble
learners that minimizes the balanced error rate (BER). AdaFair accommodates
different parity-based fairness notions and effectively mitigates
discriminatory outcomes.
Our experiments show that our approach can achieve parity in terms of
statistical parity, equal opportunity, and disparate mistreatment while
maintaining good predictive performance for all classes.
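The abstract describes the two mechanisms only in prose. A minimal, hypothetical sketch of such a loop is given below: an AdaBoost-style weight update multiplied by a cumulative fairness term computed from the partial ensemble, plus a post-training truncation that minimizes BER. The equal-opportunity choice and all function and variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an AdaFair-style boosting loop: sample weights are
# boosted by the class error, as in AdaBoost, times (1 + u_i), where u_i is
# a cumulative fairness term computed from the partial ensemble.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def _tpr(pred, y, s, group):
    """True positive rate of one protected group (0.0 if the group is empty)."""
    mask = (y == 1) & (s == group)
    return float(np.mean(pred[mask] == 1)) if mask.any() else 0.0

def cumulative_fairness(F, y, s):
    """Per-sample boost u_i: positives of the currently disadvantaged group
    get extra weight, proportional to the equal-opportunity gap of the
    partial ensemble margin F."""
    pred = np.where(F >= 0, 1, -1)
    gap = _tpr(pred, y, s, 0) - _tpr(pred, y, s, 1)
    u = np.zeros_like(F)
    disadvantaged = 1 if gap > 0 else 0
    u[(y == 1) & (s == disadvantaged)] = abs(gap)
    return u

def fit_adafair_like(X, y, s, T=50):
    """y in {-1, +1}; s in {0, 1} is the protected attribute."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    F = np.zeros(n)                       # margin of the partial ensemble
    learners, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        F += alpha * pred
        # AdaBoost update times (1 + cumulative fairness term).
        w *= np.exp(-alpha * y * pred) * (1.0 + cumulative_fairness(F, y, s))
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def truncate_by_ber(learners, alphas, X, y):
    """Post-training step: keep the ensemble prefix that minimizes the
    balanced error rate (mean of per-class error rates)."""
    F, best_t, best_ber = np.zeros(len(y)), 1, np.inf
    for t, (h, a) in enumerate(zip(learners, alphas), start=1):
        F += a * h.predict(X)
        pred = np.where(F >= 0, 1, -1)
        ber = np.mean([np.mean(pred[y == c] != c) for c in (-1, 1)])
        if ber < best_ber:
            best_ber, best_t = ber, t
    return learners[:best_t], alphas[:best_t]
```

Replacing the equal-opportunity gap inside `cumulative_fairness` with a statistical-parity or disparate-mistreatment gap would cover the other parity notions mentioned above.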
Related papers
- Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparities in accuracy and robustness across classes.
We propose Balance Adversarial Training (BAT) to address the robust fairness problem.
arXiv Detail & Related papers (2022-09-15T14:44:48Z)
- Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning [8.602734307457387]
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features.
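The entry describes the idea only at a high level. A minimal sketch of joint class-and-group oversampling, assuming plain random duplication rather than the paper's actual algorithm, could look like this:

```python
# Minimal sketch of oversampling that balances class and protected group
# jointly: every (group, class) cell is randomly resampled up to the size
# of the largest cell. Plain duplication is an assumption, not the
# paper's method.
import numpy as np

def fair_oversample(X, y, s, rng=np.random.default_rng(0)):
    cells = [np.flatnonzero((y == c) & (s == g))
             for c in np.unique(y) for g in np.unique(s)]
    target = max(len(idx) for idx in cells)
    keep = np.concatenate([
        np.concatenate([idx, rng.choice(idx, target - len(idx))])
        for idx in cells if len(idx) > 0
    ])
    return X[keep], y[keep], s[keep]
```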
arXiv Detail & Related papers (2022-07-13T09:48:52Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data [34.70704786008873]
We propose a theoretically-principled, yet flexible approach that is Imbalance-Fairness-Aware (FIFA).
FIFA encourages both classification and fairness generalization and can be flexibly combined with many existing fair learning methods with logits-based losses.
We demonstrate the power of FIFA by combining it with a popular fair classification algorithm, and the resulting algorithm achieves significantly better fairness generalization on several real-world datasets.
arXiv Detail & Related papers (2022-06-06T04:39:25Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- FairBalance: How to Achieve Equalized Odds With Data Pre-processing [15.392349679172707]
This research seeks to benefit the software engineering community by providing a simple yet effective pre-processing approach to achieve equalized odds fairness in machine learning software.
We propose FairBalance, a pre-processing algorithm which balances the class distribution in each demographic group by assigning calculated weights to the training data.
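A minimal sketch of such per-(group, class) weighting, assuming simple inverse-frequency weights rather than the exact formula in the paper:

```python
# Minimal sketch of FairBalance-style pre-processing: each training sample
# receives a weight inversely proportional to the size of its
# (demographic group, class) cell, so every cell contributes equally.
# The inverse-frequency formula is an assumption, not taken from the paper.
import numpy as np

def fair_balance_weights(y, s):
    w = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        for g in np.unique(s):
            mask = (y == c) & (s == g)
            if mask.any():
                w[mask] = len(y) / mask.sum()
    return w / w.sum() * len(y)   # normalize to mean weight 1
```

The resulting weights can then be passed to any learner that accepts a `sample_weight` argument.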
arXiv Detail & Related papers (2021-07-17T20:40:45Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
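As a rough illustration of the pseudo-labeling step in such a pre-processing framework (the confidence threshold and the base model are assumptions, not the paper's choices):

```python
# Rough sketch of pseudo labeling for semi-supervised learning: fit on the
# labeled data, label the unlabeled pool where the model is confident, then
# refit on the union. Threshold and base model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_fit(X_lab, y_lab, X_unlab, threshold=0.9):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= threshold
    pseudo = model.classes_[proba.argmax(axis=1)]
    X_aug = np.vstack([X_lab, X_unlab[confident]])
    y_aug = np.concatenate([y_lab, pseudo[confident]])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```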
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
- Ensuring Fairness Beyond the Training Data [22.284777913437182]
We develop classifiers that are fair with respect to the training distribution and to a class of perturbations of it.
Building on an online learning algorithm, we develop an iterative procedure that converges to a fair and robust solution.
Our experiments show that there is an inherent trade-off between fairness and accuracy of such classifiers.
arXiv Detail & Related papers (2020-07-12T16:20:28Z)
- Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? [11.435833538081557]
Empirical Risk Minimization (ERM) may produce a classifier that not only is biased but also has suboptimal accuracy on the true data distribution.
We examine the ability of fairness-constrained ERM to correct this problem.
We also consider other recovery methods including reweighting the training data, Equalized Odds, and Demographic Parity.
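One way to read "fairness-constrained ERM" concretely is a penalized logistic loss; the soft demographic-parity penalty below is an illustrative relaxation, not the paper's formulation:

```python
# Illustrative fairness-constrained ERM: logistic loss plus a soft
# demographic-parity penalty (difference in mean predicted score between
# groups), optimized by gradient descent. The penalty is a relaxation
# chosen for this sketch, not the paper's exact constraint.
import numpy as np

def fit_fair_erm(X, y, s, lam=1.0, lr=0.1, epochs=500):
    """y in {0, 1}; s in {0, 1} (both groups assumed non-empty);
    lam trades accuracy off against parity."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid scores
        grad_loss = X.T @ (p - y) / len(y)          # logistic-loss gradient
        # Gradient of |E[p | s=1] - E[p | s=0]| via the sigmoid derivative.
        d = p * (1 - p)
        g1 = X[s == 1].T @ d[s == 1] / (s == 1).sum()
        g0 = X[s == 0].T @ d[s == 0] / (s == 0).sum()
        gap = p[s == 1].mean() - p[s == 0].mean()
        grad_fair = np.sign(gap) * (g1 - g0)
        w -= lr * (grad_loss + lam * grad_fair)
    return w
```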
arXiv Detail & Related papers (2019-12-02T22:00:14Z)