Bayes-Optimal Classifiers under Group Fairness
- URL: http://arxiv.org/abs/2202.09724v5
- Date: Tue, 6 Feb 2024 08:38:09 GMT
- Title: Bayes-Optimal Classifiers under Group Fairness
- Authors: Xianli Zeng and Edgar Dobriban and Guang Cheng
- Abstract summary: This paper provides a unified framework for deriving Bayes-optimal classifiers under group fairness.
We propose a group-based thresholding method, which we call FairBayes, that can directly control disparity and achieve an essentially optimal fairness-accuracy tradeoff.
- Score: 32.52143951145071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms are being integrated into more and more
high-stakes decision-making processes, such as in social welfare. Due to
the need to mitigate the potentially disparate impacts of algorithmic
predictions, many approaches have been proposed in the emerging area of fair
machine learning. However, the fundamental problem of characterizing
Bayes-optimal classifiers under various group fairness constraints has only
been investigated in some special cases. Based on the classical Neyman-Pearson
argument (Neyman and Pearson, 1933; Shao, 2003) for optimal hypothesis testing,
this paper provides a unified framework for deriving Bayes-optimal classifiers
under group fairness. This enables us to propose a group-based thresholding
method, which we call FairBayes, that can directly control disparity and achieve
an essentially optimal fairness-accuracy tradeoff. These advantages are
supported by thorough experiments.
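To make the group-based thresholding idea concrete, here is a minimal sketch for the demographic-parity case. It is an illustration under our own assumptions (the function names, the quantile-matching rule, and the default target rate are ours, not details from the paper): each group receives its own threshold on the estimated probability P(Y=1 | X), chosen so that all groups share a common acceptance rate.

```python
import numpy as np

def fairbayes_style_thresholds(scores, groups, target_rate=None):
    """Sketch (assumed names/logic, not the paper's exact estimator):
    pick per-group thresholds whose group-wise acceptance rates all
    match a common target rate, driving the demographic-parity gap
    to (approximately) zero."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    if target_rate is None:
        # default: overall acceptance rate of the unconstrained 1/2-threshold rule
        target_rate = float(np.mean(scores >= 0.5))
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        # index of the (1 - target_rate) within-group quantile
        k = min(max(int(np.floor((1.0 - target_rate) * len(s))), 0), len(s) - 1)
        thresholds[g] = s[k]
    return thresholds

def fair_predict(scores, groups, thresholds):
    """Apply the group-specific thresholds to produce binary predictions."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])
```

With scores from any fitted probabilistic classifier, `thr = fairbayes_style_thresholds(scores, groups)` followed by `fair_predict(scores, groups, thr)` yields predictions with near-equal group-wise acceptance rates; sweeping `target_rate` traces out a fairness-accuracy tradeoff curve.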
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z) - Bayes-Optimal Fair Classification with Linear Disparity Constraints via Pre-, In-, and Post-processing [32.5214395114507]
We develop methods for Bayes-optimal fair classification, aiming to minimize classification error subject to given group fairness constraints.
We show that several popular disparity measures -- the deviations from demographic parity, equality of opportunity, and predictive equality -- are bilinear (a sketch of computing these gaps appears after this list).
Our methods control disparity directly while achieving near-optimal fairness-accuracy tradeoffs.
arXiv Detail & Related papers (2024-02-05T08:59:47Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
Fair algorithms that maintain strong performance while generalizing better are needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z) - Fair Bayes-Optimal Classifiers Under Predictive Parity [33.648053823193855]
This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups.
We propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied.
arXiv Detail & Related papers (2022-05-15T04:58:10Z) - Fairness with Overlapping Groups [15.154984899546333]
A standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
We reconsider this standard fair classification problem using a probabilistic population analysis.
Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures.
arXiv Detail & Related papers (2020-06-24T05:01:10Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z) - Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z) - Fair Classification via Unconstrained Optimization [0.0]
We show that the Bayes-optimal fair learning rule remains a group-wise thresholding rule over the Bayes regressor.
The proposed algorithm can be applied to any black-box machine learning model.
arXiv Detail & Related papers (2020-05-21T11:29:05Z) - Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
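Several entries above refer to controlling specific disparity measures. As a small, self-contained illustration (the helper name and the two-group restriction are assumptions of this sketch, not details from any paper listed), the gaps for demographic parity, equality of opportunity, and predictive equality can be computed as follows:

```python
import numpy as np

def disparity_gaps(y_true, y_pred, groups):
    """Hypothetical helper (not from any of the papers above): absolute
    gaps between two protected groups for three common disparity measures.
    Assumes binary labels, binary predictions, and exactly two groups,
    each containing both label values."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    g0, g1 = np.unique(groups)
    a, b = groups == g0, groups == g1

    def rate(mask):
        # positive-prediction rate within the given subset
        return y_pred[mask].mean()

    return {
        # demographic parity: P(Yhat = 1 | A) equal across groups
        "demographic_parity": abs(rate(a) - rate(b)),
        # equality of opportunity: true positive rates equal across groups
        "equal_opportunity": abs(rate(a & (y_true == 1)) - rate(b & (y_true == 1))),
        # predictive equality: false positive rates equal across groups
        "predictive_equality": abs(rate(a & (y_true == 0)) - rate(b & (y_true == 0))),
    }
```

A thresholding method that "directly controls disparity", in the sense used above, drives one of these gaps below a chosen tolerance while losing as little accuracy as possible.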
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.