Post-hoc Bias Scoring Is Optimal For Fair Classification
- URL: http://arxiv.org/abs/2310.05725v3
- Date: Fri, 15 Mar 2024 15:09:24 GMT
- Title: Post-hoc Bias Scoring Is Optimal For Fair Classification
- Authors: Wenlong Chen, Yegor Klochkov, Yang Liu
- Abstract summary: We introduce a novel instance-level measure of bias, which we call bias score, and the modification rule is a simple linear rule applied to a finite number of bias scores.
In the case of DP and EOp constraints, the modification rule thresholds a single bias score, while in the case of EO constraints a linear modification rule with two parameters must be fit.
- Score: 12.897626117694317
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider a binary classification problem under group fairness constraints, which can be one of Demographic Parity (DP), Equalized Opportunity (EOp), or Equalized Odds (EO). We propose an explicit characterization of the Bayes-optimal classifier under the fairness constraints, which turns out to be a simple modification rule of the unconstrained classifier. Namely, we introduce a novel instance-level measure of bias, which we call bias score, and the modification rule is a simple linear rule applied to a finite number of bias scores. Based on this characterization, we develop a post-hoc approach that allows us to adapt to fairness constraints while maintaining high accuracy. In the case of DP and EOp constraints, the modification rule thresholds a single bias score, while in the case of EO constraints a linear modification rule with two parameters must be fit. The method can also be applied to composite group-fairness criteria, such as ones involving several sensitive attributes.
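As a rough illustration of this post-hoc recipe, the sketch below flips the least-confident positive predictions of the over-favored group until a demographic-parity target is met. It is a minimal sketch only: the function `posthoc_dp`, the margin-based score, the flip-until-target loop, and the synthetic data are assumptions for illustration, not the paper's bias score, which is derived from the Bayes-optimal characterization.

```python
import numpy as np

def dp_gap(y_pred, group):
    """Demographic-parity gap: |P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def posthoc_dp(prob, group, dp_target=0.02):
    """Post-hoc correction of a pre-trained scorer toward Demographic Parity.

    prob      -- base model's predicted P(Y=1 | x) per instance
    group     -- binary sensitive attribute A per instance
    dp_target -- maximum acceptable demographic-parity gap

    The per-instance score used here (an assumption, not the paper's bias score)
    is the confidence margin of positive predictions in the over-favored group;
    flipping the smallest margins first costs the least accuracy per unit of
    disparity removed.
    """
    y_fair = (prob >= 0.5).astype(int)
    # Group whose positive-prediction rate is currently higher.
    favored = int(y_fair[group == 1].mean() > y_fair[group == 0].mean())
    # Candidate flips: positive predictions in the favored group, least confident first.
    cand = np.where((group == favored) & (y_fair == 1))[0]
    cand = cand[np.argsort(prob[cand])]
    for i in cand:
        if dp_gap(y_fair, group) <= dp_target:
            break
        y_fair[i] = 0  # flip the cheapest remaining prediction
    return y_fair

# Synthetic usage: group 1 receives systematically higher scores.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=2000)
prob = np.clip(rng.normal(0.45 + 0.15 * group, 0.2), 0.0, 1.0)
y_base = (prob >= 0.5).astype(int)
print("DP gap before:", dp_gap(y_base, group), "after:", dp_gap(posthoc_dp(prob, group), group))
```

Under EOp the same kind of loop would restrict the group-wise rates to the label-1 instances, and under EO a rule with two parameters would be fit, as the abstract notes.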
Related papers
- Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost" -- a linear combination of the (predicted) group memberships (a minimal sketch of this recalibration appears after the list below).
arXiv Detail & Related papers (2024-05-07T05:58:44Z) - Bayes-Optimal Fair Classification with Linear Disparity Constraints via Pre-, In-, and Post-processing [32.5214395114507]
We develop methods for Bayes-optimal fair classification, aiming to minimize classification error subject to given group fairness constraints.
We show that several popular disparity measures -- the deviations from demographic parity, equality of opportunity, and predictive equality -- are bilinear.
Our methods control disparity directly while achieving near-optimal fairness-accuracy tradeoffs.
arXiv Detail & Related papers (2024-02-05T08:59:47Z) - On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z) - Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z) - Mitigating Algorithmic Bias with Limited Annotations [65.060639928772]
When sensitive attributes are not disclosed or available, a small part of the training data must be manually annotated to mitigate bias.
We propose Active Penalization Of Discrimination (APOD), an interactive framework to guide the limited annotations towards maximally eliminating the effect of algorithmic bias.
APOD shows comparable performance to fully annotated bias mitigation, which demonstrates that APOD could benefit real-world applications when sensitive information is limited.
arXiv Detail & Related papers (2022-07-20T16:31:19Z) - Interpretable and Fair Boolean Rule Sets via Column Generation [18.08486863429421]
An integer program is formulated to optimally trade classification accuracy for rule simplicity.
We consider the fairness setting and extend the formulation to include explicit constraints on two different measures of classification parity.
Compared to other fair and interpretable classifiers, our method is able to find rule sets that meet stricter notions of fairness with a modest trade-off in accuracy.
arXiv Detail & Related papers (2021-11-16T13:40:28Z) - When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z) - Evaluating Debiasing Techniques for Intersectional Biases [53.41549919978481]
Bias is pervasive in NLP models, motivating the development of automatic debiasing techniques.
In this paper we argue that a truly fair model must consider 'gerrymandering' groups which comprise not only single attributes, but also intersectional groups.
arXiv Detail & Related papers (2021-09-21T22:01:28Z) - Fair Decision Rules for Binary Classification [0.0]
We consider the problem of building Boolean rule sets in disjunctive normal form (DNF).
We formulate the problem as an integer program that maximizes classification accuracy with explicit constraints on two different measures of classification parity.
Compared to other fair and interpretable classifiers, our method is able to find rule sets that meet stricter notions of fairness with a modest trade-off in accuracy.
arXiv Detail & Related papers (2021-07-03T02:32:17Z) - Rule Generation for Classification: Scalability, Interpretability, and Fairness [0.0]
We propose a new rule-based optimization method for classification with constraints.
We address interpretability and fairness by assigning cost coefficients to the rules and introducing additional constraints.
The proposed method exhibits a good compromise between local interpretability and fairness on the one side, and accuracy on the other side.
arXiv Detail & Related papers (2021-04-21T20:31:28Z) - GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without introducing additional inductive biases.
We address VAE-based unsupervised disentanglement by leveraging the constraints derived from the group-theory-based definition as a non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z)
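For the "Optimal Group Fair Classifiers from Linear Post-Processing" entry above, the sketch below shows what re-calibrating a base score with a linear "fairness cost" might look like. It is an assumption built only from the one-sentence summary: the function name `fairness_cost_recalibrate`, the soft group memberships, and the fixed cost vector `lam` are illustrative; in that paper the coefficients would instead be fit so the post-processed classifier satisfies the chosen group-fairness criterion.

```python
import numpy as np

def fairness_cost_recalibrate(prob, group_prob, lam, threshold=0.5):
    """Re-score a base classifier with a linear "fairness cost" (illustrative sketch).

    prob       -- base model's P(Y=1 | x), shape (n,)
    group_prob -- (predicted) group memberships P(A=a | x), shape (n, k)
    lam        -- per-group cost coefficients, shape (k,); fixed here for
                  illustration, normally tuned on held-out data
    """
    adjusted = prob - group_prob @ lam  # subtract a linear combination of memberships
    return (adjusted >= threshold).astype(int)

# Hypothetical usage with soft memberships in two groups.
rng = np.random.default_rng(1)
group_prob = rng.dirichlet([1.0, 1.0], size=1000)
prob = rng.uniform(size=1000)
y_adj = fairness_cost_recalibrate(prob, group_prob, lam=np.array([0.0, 0.05]))
```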