One-vs.-One Mitigation of Intersectional Bias: A General Method to
Extend Fairness-Aware Binary Classification
- URL: http://arxiv.org/abs/2010.13494v1
- Date: Mon, 26 Oct 2020 11:35:39 GMT
- Title: One-vs.-One Mitigation of Intersectional Bias: A General Method to
Extend Fairness-Aware Binary Classification
- Authors: Kenji Kobayashi, Yuri Nakao
- Abstract summary: One-vs.-One Mitigation applies a pairwise comparison between each pair of subgroups defined by the sensitive attributes to fairness-aware machine learning for binary classification.
The method mitigates intersectional bias much better than conventional methods in all the settings evaluated.
- Score: 0.48733623015338234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the widespread adoption of machine learning in the real world, the
impact of discriminatory bias has attracted attention. In recent years, various
methods to mitigate such bias have been proposed. However, most of them have not
considered intersectional bias, which causes unfair situations in which people
belonging to specific subgroups of a protected group are treated worse when
multiple sensitive attributes are taken into consideration. To mitigate this
bias, in this paper, we propose a method called One-vs.-One Mitigation, which
applies a pairwise comparison between each pair of subgroups defined by the
sensitive attributes to fairness-aware machine learning for binary
classification. We compare our method with conventional fairness-aware binary
classification methods in comprehensive settings using three approaches
(pre-processing, in-processing, and post-processing), six metrics (the ratio
and difference of demographic parity, equalized odds, and equal opportunity),
and two real-world datasets (Adult and COMPAS). As a result, our method
mitigates intersectional bias much better than the conventional methods in all
of these settings. These results open up the potential of fairness-aware binary
classification for solving the more realistic problems that arise when multiple
sensitive attributes are present.
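As a rough sketch of the pairwise comparison described above (illustrative code only, not the authors' implementation; all names are hypothetical), the following Python snippet enumerates the intersectional subgroups induced by two sensitive attributes and reports the demographic-parity difference, one of the six metrics above, for every one-vs.-one pair of subgroups:
```python
# Minimal sketch (not the authors' code): compare every pair of intersectional
# subgroups with the demographic-parity difference.
from itertools import combinations
import numpy as np

def one_vs_one_dp_differences(y_pred, sensitive):
    """For each pair of intersectional subgroups, return the absolute
    difference in positive-prediction rates (demographic-parity difference).

    y_pred    : array of 0/1 predictions
    sensitive : one intersectional subgroup label per sample, e.g. (race, sex)
    """
    y_pred = np.asarray(y_pred)
    groups = {}
    for idx, g in enumerate(sensitive):
        groups.setdefault(g, []).append(idx)

    diffs = {}
    for g1, g2 in combinations(sorted(groups), 2):
        rate1 = y_pred[groups[g1]].mean()
        rate2 = y_pred[groups[g2]].mean()
        diffs[(g1, g2)] = abs(rate1 - rate2)
    return diffs

# Toy usage with made-up predictions and (race, sex) subgroup labels.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = [("a", "f"), ("a", "f"), ("a", "m"), ("b", "f"),
             ("b", "f"), ("b", "m"), ("b", "m"), ("a", "m")]
print(one_vs_one_dp_differences(y_pred, sensitive))
```
In the paper's setting the subgroups would come from the sensitive attributes of Adult or COMPAS, and pairwise ratios as well as equalized-odds and equal-opportunity gaps would be computed analogously using the ground-truth labels.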
Related papers
- ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods [12.774108753281809]
We introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting.
We apply ABCFair to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset.
arXiv Detail & Related papers (2024-09-25T14:26:07Z)
- Bayes-Optimal Fair Classification with Linear Disparity Constraints via Pre-, In-, and Post-processing [32.5214395114507]
We develop methods for Bayes-optimal fair classification, aiming to minimize classification error subject to given group fairness constraints.
We show that several popular disparity measures -- the deviations from demographic parity, equality of opportunity, and predictive equality -- are bilinear.
Our methods control disparity directly while achieving near-optimal fairness-accuracy tradeoffs.
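For reference, the three disparity measures named here have standard definitions; the sketch below (illustrative, not taken from the paper) computes their signed deviations between two groups A=0 and A=1:
```python
# Illustrative definitions only: signed deviations of three standard disparity
# measures between groups A=0 and A=1.
import numpy as np

def disparities(y_true, y_pred, a):
    y_true, y_pred, a = map(np.asarray, (y_true, y_pred, a))
    def rate(mask):
        return y_pred[mask].mean() if mask.any() else np.nan
    dp = rate(a == 1) - rate(a == 0)                                        # demographic parity
    eo = rate((a == 1) & (y_true == 1)) - rate((a == 0) & (y_true == 1))    # equal opportunity (TPR gap)
    pe = rate((a == 1) & (y_true == 0)) - rate((a == 0) & (y_true == 0))    # predictive equality (FPR gap)
    return {"demographic_parity": dp, "equal_opportunity": eo, "predictive_equality": pe}
```
Taking absolute values of these signed deviations recovers the usual "difference" versions of the metrics.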
arXiv Detail & Related papers (2024-02-05T08:59:47Z)
- When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness [8.367620276482056]
We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation through five dimensions.
Applying FRAME to various bias mitigation approaches across key datasets allows us to exhibit significant differences in the behaviors of debiasing methods.
These findings highlight the limitations of current fairness criteria and the inherent arbitrariness in the debiasing process.
arXiv Detail & Related papers (2023-02-14T16:53:52Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
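FairCOCCO's exact estimator is not reproduced here; as a simpler stand-in for the idea of a kernel cross-covariance dependence measure between model outputs and a sensitive attribute, the sketch below computes the standard (biased) HSIC statistic:
```python
# Simplified stand-in for a kernel cross-covariance dependence measure: the
# biased HSIC estimator between model scores and a sensitive attribute.
# This is NOT FairCOCCO itself, just an illustration of the general idea.
import numpy as np

def rbf_kernel(x, gamma=1.0):
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def hsic(scores, sensitive, gamma=1.0):
    n = len(scores)
    K = rbf_kernel(scores, gamma)
    L = rbf_kernel(sensitive, gamma)
    H = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2   # near 0 means near-independence
```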
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
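The summary does not spell out the two objectives, so the sketch below is only a generic illustration of directly optimising an equal-opportunity criterion: a differentiable proxy for the true-positive-rate gap is added as a penalty to an ordinary classification loss (hypothetical code, not necessarily the paper's formulation):
```python
# Generic sketch (not necessarily the paper's objectives): cross-entropy plus a
# soft equal-opportunity penalty, i.e. the gap in mean predicted probability on
# positive examples between the two protected groups.
import torch
import torch.nn.functional as F

def loss_with_eo_penalty(logits, y, a, lam=1.0):
    """logits: model outputs; y: 0/1 labels; a: 0/1 protected attribute."""
    ce = F.binary_cross_entropy_with_logits(logits, y.float())
    p = torch.sigmoid(logits)
    pos0 = (y == 1) & (a == 0)
    pos1 = (y == 1) & (a == 1)
    # Fall back to 0 when a group has no positive examples in the batch.
    tpr0 = p[pos0].mean() if pos0.any() else p.new_tensor(0.0)
    tpr1 = p[pos1].mean() if pos1.any() else p.new_tensor(0.0)
    return ce + lam * (tpr0 - tpr1).abs()
```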
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Social Norm Bias: Residual Harms of Fairness-Aware Algorithms [21.50551404445654]
Social Norm Bias (SNoB) is a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems.
We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to gender norms.
We show that post-processing interventions do not mitigate this type of bias at all.
arXiv Detail & Related papers (2021-08-25T05:54:56Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
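For context on the conventional "pairwise ranking approach with negative samplers" mentioned here, a minimal BPR-style pairwise loss looks like the sketch below (a textbook construction, not the method this paper proposes):
```python
# Minimal BPR-style pairwise ranking loss with uniform negative sampling.
# Illustrates the conventional pairwise approach the summary refers to.
import numpy as np

def bpr_loss(user_vec, item_embs, pos_item, rng, n_items):
    """Score an observed (positive) item above a uniformly sampled negative.
    A real implementation would resample if neg_item equals pos_item."""
    neg_item = rng.integers(n_items)
    s_pos = user_vec @ item_embs[pos_item]
    s_neg = user_vec @ item_embs[neg_item]
    return -np.log(1.0 / (1.0 + np.exp(-(s_pos - s_neg))))  # -log sigmoid(s_pos - s_neg)
```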
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.