Rawlsian Fair Adaptation of Deep Learning Classifiers
- URL: http://arxiv.org/abs/2105.14890v1
- Date: Mon, 31 May 2021 11:31:30 GMT
- Title: Rawlsian Fair Adaptation of Deep Learning Classifiers
- Authors: Kulin Shah, Pooja Gupta, Amit Deshpande, Chiranjib Bhattacharyya
- Abstract summary: Group-fairness in classification aims for equality of a predictive utility across different sensitive sub-populations, e.g., race or gender.
This paper derives the Rawls classifier, which minimizes the error rate on the worst-off sensitive sub-population.
Our empirical results show significant improvement over state-of-the-art group-fair algorithms, even without retraining for fairness.
- Score: 18.150327860396786
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group-fairness in classification aims for equality of a predictive utility
across different sensitive sub-populations, e.g., race or gender. Equality or
near-equality constraints in group-fairness often worsen not only the aggregate
utility but also the utility for the least advantaged sub-population. In this
paper, we apply the principles of Pareto-efficiency and least-difference to the
utility being accuracy, as an illustrative example, and arrive at the Rawls
classifier that minimizes the error rate on the worst-off sensitive
sub-population. Our mathematical characterization shows that the Rawls
classifier uniformly applies a threshold to an ideal score of features, in the
spirit of fair equality of opportunity. In practice, such a score or a feature
representation is often computed by a black-box model that has been useful but
unfair. Our second contribution is practical Rawlsian fair adaptation of any
given black-box deep learning model, without changing the score or feature
representation it computes. Given any score function or feature representation
and only its second-order statistics on the sensitive sub-populations, we seek
a threshold classifier on the given score or a linear threshold classifier on
the given feature representation that achieves the Rawls error rate restricted
to this hypothesis class. Our technical contribution is to formulate the above
problems using ambiguous chance constraints, and to provide efficient
algorithms for Rawlsian fair adaptation, along with provable upper bounds on
the Rawls error rate. Our empirical results show significant improvement over
state-of-the-art group-fair algorithms, even without retraining for fairness.
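As a concrete illustration of the adaptation setting, a minimal sketch is shown below: given a fixed black-box score and sensitive group labels, brute-force search a single threshold and keep the one with the smallest worst-group error rate. This only illustrates the restricted hypothesis class (threshold classifiers on a given score); the paper's actual algorithms work from second-order statistics via ambiguous chance constraints, which this sketch does not implement, and all names here are hypothetical.

```python
import numpy as np

def rawls_threshold(scores, y, groups, n_grid=201):
    """Brute-force search for a single threshold on a fixed black-box score
    that minimizes the error rate of the worst-off sensitive sub-population.

    scores : (n,) real-valued outputs of the given (possibly unfair) model
    y      : (n,) binary labels in {0, 1}
    groups : (n,) sensitive group ids
    """
    thresholds = np.quantile(scores, np.linspace(0.0, 1.0, n_grid))
    best_t, best_worst_err = None, np.inf
    for t in thresholds:
        y_hat = (scores >= t).astype(int)
        # Error rate of each sensitive sub-population at this threshold.
        worst_err = max(np.mean(y_hat[groups == g] != y[groups == g])
                        for g in np.unique(groups))
        if worst_err < best_worst_err:
            best_t, best_worst_err = t, worst_err
    return best_t, best_worst_err

# Hypothetical usage with pre-computed scores from a black-box model:
# t, worst_err = rawls_threshold(model_scores, labels, sensitive_attr)
```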
Related papers
- Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost" -- a linear combination of the (predicted) group memberships.
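A rough sketch of the "fairness cost" idea as described above: the base model's score is shifted by a linear combination of (predicted) group-membership probabilities before thresholding. The coefficients `lam` below are placeholders; the paper derives them from the chosen group-fairness criterion, which this sketch does not reproduce.

```python
import numpy as np

def recalibrate_with_fairness_cost(base_scores, group_probs, lam):
    """Shift the base classifier's score by a linear 'fairness cost' in the
    (predicted) group-membership probabilities, then threshold as usual.

    base_scores : (n,) scores of the base model (e.g., predicted probabilities)
    group_probs : (n, k) predicted probability of belonging to each group
    lam         : (k,) placeholder coefficients, one per group
    """
    adjusted = base_scores - group_probs @ np.asarray(lam)
    return (adjusted >= 0.5).astype(int)
```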
arXiv Detail & Related papers (2024-05-07T05:58:44Z)
- Bayes-Optimal Fair Classification with Linear Disparity Constraints via Pre-, In-, and Post-processing [32.5214395114507]
We develop methods for Bayes-optimal fair classification, aiming to minimize classification error subject to given group fairness constraints.
We show that several popular disparity measures -- the deviations from demographic parity, equality of opportunity, and predictive equality -- are bilinear.
Our methods control disparity directly while achieving near-optimal fairness-accuracy tradeoffs.
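For concreteness, the three disparity measures named above can be computed directly from predictions; each is a difference of group-conditional rates and hence linear in the classifier's outputs. A minimal sketch for a binary sensitive attribute:

```python
import numpy as np

def disparities(y_hat, y, a):
    """Deviations from demographic parity, equality of opportunity, and
    predictive equality for binary predictions y_hat, labels y, and a
    binary sensitive attribute a (all numpy arrays)."""
    def rate(mask):
        return y_hat[mask].mean()
    dp = rate(a == 0) - rate(a == 1)                             # demographic parity
    eo = rate((a == 0) & (y == 1)) - rate((a == 1) & (y == 1))   # equality of opportunity
    pe = rate((a == 0) & (y == 0)) - rate((a == 1) & (y == 0))   # predictive equality
    return dp, eo, pe
```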
arXiv Detail & Related papers (2024-02-05T08:59:47Z)
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated into classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
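As a minimal illustration of the first sentence above (quantizing a regression target into classes), here is a sketch using equal-width bins; the hierarchical classifiers and the HCA adjustment themselves are not reproduced.

```python
import numpy as np

def quantize_targets(y, n_classes=10):
    """Turn a continuous regression target into class indices by
    equal-width binning over the observed range."""
    edges = np.linspace(y.min(), y.max(), n_classes + 1)
    # Using only the interior edges keeps indices in 0 .. n_classes - 1.
    return np.digitize(y, edges[1:-1])
```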
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
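A toy sketch of the estimation step described above, under the simplifying assumption that each example from group g independently survives the biased collection with probability p_g: comparing group frequencies in the biased data against a small unbiased sample identifies the relative retention rates, and hence relative drop-out rates. This is an illustrative assumption, not the paper's exact model.

```python
import numpy as np

def estimate_dropout_rates(groups_biased, groups_unbiased):
    """Estimate relative per-group drop-out from group frequencies in a
    large biased sample vs. a small unbiased sample, assuming each group-g
    example is retained independently with probability p_g and that every
    group appears in the unbiased sample."""
    uniq = np.unique(np.concatenate([groups_biased, groups_unbiased]))
    freq_b = np.array([np.mean(groups_biased == g) for g in uniq])
    freq_u = np.array([np.mean(groups_unbiased == g) for g in uniq])
    retention = freq_b / freq_u               # proportional to p_g
    retention = retention / retention.max()   # best-retained group -> 1
    return dict(zip(uniq, 1.0 - retention))   # relative drop-out rates
```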
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
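For one-dimensional scores, the Wasserstein-barycenter view has a simple reading: map each group's scores, via their within-group ranks, onto a common barycenter distribution whose quantile function averages the per-group quantile functions; any single threshold on the adjusted score then yields approximately equal positive rates across groups. A rough numpy sketch (not the paper's algorithm):

```python
import numpy as np

def barycenter_adjust(scores, groups, n_levels=101):
    """Map each group's scores to the 1-D Wasserstein barycenter of the
    per-group score distributions via quantile matching."""
    levels = np.linspace(0.0, 1.0, n_levels)
    uniq = np.unique(groups)
    # Barycenter quantile function: average of the per-group quantile functions.
    q_bar = np.mean([np.quantile(scores[groups == g], levels) for g in uniq], axis=0)
    adjusted = np.empty_like(scores, dtype=float)
    for g in uniq:
        s_g = scores[groups == g]
        # Within-group empirical CDF value (rank) of each score.
        ranks = np.searchsorted(np.sort(s_g), s_g, side="right") / len(s_g)
        adjusted[groups == g] = np.interp(ranks, levels, q_bar)
    return adjusted
```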
arXiv Detail & Related papers (2022-11-03T00:04:04Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Fairness-Aware Naive Bayes Classifier for Data with Multiple Sensitive Features [0.0]
We generalise two-naive-Bayes (2NB) into N-naive-Bayes (NNB) to eliminate the simplification of assuming only two sensitive groups in the data.
We investigate its application on data with multiple sensitive features and propose a new constraint and post-processing routine to enforce differential fairness.
arXiv Detail & Related papers (2022-02-23T13:32:21Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
- Fair Classification via Unconstrained Optimization [0.0]
We show that the Bayes optimal fair learning rule remains a group-wise thresholding rule over the Bayes regressor.
The proposed algorithm can be applied to any black-box machine learning model.
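The group-wise thresholding rule above can be sketched directly: estimate eta(x) = P(Y=1 | x) with any black-box model and apply a separate threshold per sensitive group. The thresholds below are chosen to equalize positive rates (a demographic-parity-style choice) purely for illustration; the paper characterizes the optimal group-wise thresholds for its fairness criterion.

```python
import numpy as np

def groupwise_threshold_predict(eta, groups, target_rate=0.3):
    """Apply a per-group threshold to an estimated Bayes regressor
    eta(x) = P(Y=1 | x); each group's threshold is its (1 - target_rate)
    quantile, so all groups receive the same positive rate."""
    y_hat = np.zeros_like(eta, dtype=int)
    for g in np.unique(groups):
        t_g = np.quantile(eta[groups == g], 1.0 - target_rate)
        y_hat[groups == g] = (eta[groups == g] >= t_g).astype(int)
    return y_hat
```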
arXiv Detail & Related papers (2020-05-21T11:29:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.