Bayes-Optimal Fair Classification with Linear Disparity Constraints via
Pre-, In-, and Post-processing
- URL: http://arxiv.org/abs/2402.02817v2
- Date: Tue, 6 Feb 2024 07:02:16 GMT
- Title: Bayes-Optimal Fair Classification with Linear Disparity Constraints via
Pre-, In-, and Post-processing
- Authors: Xianli Zeng, Guang Cheng and Edgar Dobriban
- Abstract summary: We develop methods for Bayes-optimal fair classification, aiming to minimize classification error subject to given group fairness constraints.
We show that several popular disparity measures -- the deviations from demographic parity, equality of opportunity, and predictive equality -- are bilinear.
Our methods control disparity directly while achieving near-optimal fairness-accuracy tradeoffs.
- Score: 32.5214395114507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms may have disparate impacts on protected groups.
To address this, we develop methods for Bayes-optimal fair classification,
aiming to minimize classification error subject to given group fairness
constraints. We introduce the notion of \emph{linear disparity measures}, which
are linear functions of a probabilistic classifier; and \emph{bilinear
disparity measures}, which are also linear in the group-wise regression
functions. We show that several popular disparity measures -- the deviations
from demographic parity, equality of opportunity, and predictive equality --
are bilinear.
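To make the linearity concrete, here is a minimal sketch (illustrative, not the paper's code) showing that the deviation from demographic parity is a linear function of a probabilistic classifier's acceptance probabilities:

```python
import numpy as np

def dp_disparity(probs, groups):
    """Deviation from demographic parity for a probabilistic classifier.

    probs:  f(x) in [0, 1], the probability of predicting the positive class
    groups: binary protected attribute (0 or 1) per example

    The disparity is E[f(X) | A=1] - E[f(X) | A=0], which is linear in f:
    mixing two classifiers mixes their disparities in the same proportions.
    """
    probs = np.asarray(probs, dtype=float)
    groups = np.asarray(groups)
    return probs[groups == 1].mean() - probs[groups == 0].mean()

# Linearity check: the disparity of a mixture is the mixture of disparities.
rng = np.random.default_rng(0)
g = rng.integers(0, 2, size=1000)
f1 = rng.random(1000)
f2 = rng.random(1000)
lam = 0.3
mixed = lam * f1 + (1 - lam) * f2
assert np.isclose(dp_disparity(mixed, g),
                  lam * dp_disparity(f1, g) + (1 - lam) * dp_disparity(f2, g))
```

The same functional evaluated on group-wise regression functions instead of raw probabilities gives the bilinear measures (e.g. equality of opportunity) described above.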
We find the form of Bayes-optimal fair classifiers under a single linear
disparity measure, by uncovering a connection with the Neyman-Pearson lemma.
For bilinear disparity measures, Bayes-optimal fair classifiers become
group-wise thresholding rules. Our approach can also handle multiple fairness
constraints (such as equalized odds), and the common scenario when the
protected attribute cannot be used at the prediction phase.
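The group-wise thresholding form can be sketched as follows; the estimated regression values, group labels, and specific cutoffs here are illustrative placeholders, not quantities fitted by the paper's procedures:

```python
import numpy as np

def groupwise_threshold_classifier(eta_hat, groups, thresholds):
    """Apply a group-specific threshold to an estimated regression function.

    eta_hat:    estimates of eta(x) = P(Y = 1 | X = x)
    groups:     protected-attribute label per example
    thresholds: dict mapping each group to its own cutoff; moving the
                cutoffs apart trades accuracy for lower disparity.
    """
    eta_hat = np.asarray(eta_hat, dtype=float)
    groups = np.asarray(groups)
    cut = np.array([thresholds[a] for a in groups])
    return (eta_hat >= cut).astype(int)

eta = np.array([0.2, 0.55, 0.7, 0.4, 0.9])
grp = np.array([0, 0, 1, 1, 1])
# The unconstrained Bayes rule uses a single threshold of 1/2 for everyone.
y_unconstrained = groupwise_threshold_classifier(eta, grp, {0: 0.5, 1: 0.5})
# A fair variant shifts the group cutoffs to satisfy a disparity constraint.
y_fair = groupwise_threshold_classifier(eta, grp, {0: 0.5, 1: 0.75})
```

Setting both thresholds to 1/2 recovers the standard Bayes classifier; the fairness constraint only moves the per-group cutoffs.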
Leveraging our theoretical results, we design methods that learn fair
Bayes-optimal classifiers under bilinear disparity constraints. Our methods
cover three popular approaches to fairness-aware classification, via
pre-processing (Fair Up- and Down-Sampling), in-processing (Fair Cost-Sensitive
Classification) and post-processing (a Fair Plug-In Rule). Our methods control
disparity directly while achieving near-optimal fairness-accuracy tradeoffs. We
show empirically that our methods compare favorably to existing algorithms.
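The pre-processing idea can be illustrated by resampling so that group-conditional label frequencies are balanced before training; this is a generic up-sampling sketch under that assumption, not the paper's exact sampler:

```python
import numpy as np

def fair_upsample(X, y, groups, rng=None):
    """Up-sample each (group, label) cell to the size of the largest cell.

    An unconstrained classifier trained on the resampled data behaves like
    one trained with a fairness adjustment on the original distribution.
    """
    rng = np.random.default_rng(rng)
    X, y, groups = map(np.asarray, (X, y, groups))
    cells = [(a, c) for a in np.unique(groups) for c in np.unique(y)]
    target = max(np.sum((groups == a) & (y == c)) for a, c in cells)
    idx = []
    for a, c in cells:
        members = np.flatnonzero((groups == a) & (y == c))
        idx.append(rng.choice(members, size=target, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx], groups[idx]
```

After resampling, every (group, label) cell is equally represented, so the downstream learner no longer sees group-dependent base rates.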
Related papers
- Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost" -- a linear combination of the (predicted) group memberships.
arXiv Detail & Related papers (2024-05-07T05:58:44Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z)
- Repairing Regressors for Fair Binary Classification at Any Decision Threshold [8.322348511450366]
We show that we can increase fair performance across all thresholds at once.
We introduce a formal measure of Distributional Parity, which captures the degree of similarity in the distributions of classifications for different protected groups.
Our main result is to put forward a novel post-processing algorithm based on optimal transport, which provably maximizes Distributional Parity.
arXiv Detail & Related papers (2022-03-14T20:53:35Z)
- Group-Aware Threshold Adaptation for Fair Classification [9.496524884855557]
We introduce a novel post-processing method to optimize over multiple fairness constraints.
Our method provably attains a tighter near-optimality upper bound than existing methods under the same conditions.
Experimental results demonstrate that our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-11-08T04:36:37Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Fairness with Overlapping Groups [15.154984899546333]
A standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
We reconsider this standard fair classification problem using a probabilistic population analysis.
Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures.
arXiv Detail & Related papers (2020-06-24T05:01:10Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Fair Classification via Unconstrained Optimization [0.0]
We show that the Bayes optimal fair learning rule remains a group-wise thresholding rule over the Bayes regressor.
The proposed algorithm can be applied to any black-box machine learning model.
arXiv Detail & Related papers (2020-05-21T11:29:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.