Fairness and Unfairness in Binary and Multiclass Classification: Quantifying, Calculating, and Bounding
- URL: http://arxiv.org/abs/2206.03234v2
- Date: Fri, 5 Apr 2024 18:00:01 GMT
- Title: Fairness and Unfairness in Binary and Multiclass Classification: Quantifying, Calculating, and Bounding
- Authors: Sivan Sabato, Eran Treister, Elad Yom-Tov
- Abstract summary: We propose a new interpretable measure of unfairness that enables a quantitative analysis of classifier fairness.
We show how this measure can be calculated when the classifier's conditional confusion matrices are known.
We report experiments on data sets representing diverse applications.
- Score: 22.449347663780767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new interpretable measure of unfairness that enables a quantitative analysis of classifier fairness, beyond a dichotomous fair/unfair distinction. We show how this measure can be calculated when the classifier's conditional confusion matrices are known. We further propose methods for auditing classifiers for their fairness when the confusion matrices cannot be obtained or even estimated. Our approach lower-bounds the unfairness of a classifier based only on aggregate statistics, which may be provided by the owner of the classifier or collected from freely available data. We use the equalized odds criterion, which we generalize to the multiclass case. We report experiments on data sets representing diverse applications, which demonstrate the effectiveness and the wide range of possible uses of the proposed methodology. An implementation of the procedures proposed in this paper, as well as the code for running the experiments, is provided at https://github.com/sivansabato/unfairness.
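As a concrete illustration, here is a minimal sketch of quantifying a multiclass equalized-odds violation from per-group conditional confusion matrices. The discrepancy used here (worst-case total-variation distance between the groups' conditional prediction distributions) and the helper name `eq_odds_violation` are illustrative assumptions, not necessarily the paper's exact unfairness measure; see the linked repository for the authors' implementation.

```python
import numpy as np

def eq_odds_violation(conf_by_group):
    """Crude multiclass equalized-odds gap.

    conf_by_group: dict mapping group name -> (k x k) confusion matrix,
    rows = true labels, columns = predicted labels. Returns the largest
    total-variation distance, over true labels and group pairs, between
    P(prediction | true label, group).
    """
    cond = {}
    for g, M in conf_by_group.items():
        M = np.asarray(M, dtype=float)
        rows = M.sum(axis=1, keepdims=True)
        # Normalize each row to get P(prediction | true label) per group.
        cond[g] = np.divide(M, rows, out=np.zeros_like(M), where=rows > 0)

    groups = list(cond)
    gap = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            # TV distance per true label; keep the worst case.
            tv = 0.5 * np.abs(cond[groups[i]] - cond[groups[j]]).sum(axis=1)
            gap = max(gap, tv.max())
    return gap

# Toy example: two groups, three classes; the groups differ on class 0.
A = [[80, 10, 10], [5, 90, 5], [10, 10, 80]]
B = [[60, 30, 10], [5, 90, 5], [10, 10, 80]]
print(eq_odds_violation({"a": A, "b": B}))  # 0.2
```

A value of 0 means the groups' conditional confusion matrices agree exactly; larger values indicate a larger equalized-odds violation.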
Related papers
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
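A minimal sketch of the retention-rate idea above, assuming a simple model in which each example from group g survives into the biased sample independently with probability r_g, and that every group appears in the unbiased reference sample; the paper's estimator and its guarantees are more involved.

```python
import numpy as np

def estimate_dropout_rates(biased_groups, unbiased_groups):
    """Estimate per-group drop-out rates by comparing group frequencies
    in the biased data against a small unbiased reference sample.
    Identifies retention rates up to a common scale, fixed here by the
    (illustrative) assumption that the most-retained group loses nothing."""
    groups = sorted(set(unbiased_groups))
    p_unbiased = np.array([np.mean(np.asarray(unbiased_groups) == g) for g in groups])
    p_biased = np.array([np.mean(np.asarray(biased_groups) == g) for g in groups])
    ratio = p_biased / p_unbiased      # proportional to retention rates r_g
    retention = ratio / ratio.max()    # normalize: most-retained group -> 1
    return dict(zip(groups, 1.0 - retention))
```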
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- Arbitrariness and Social Prediction: The Confounding Role of Variance in Fair Classification [31.392067805022414]
Variance in predictions across different trained models is a significant, under-explored source of error in fair binary classification.
In practice, the variance on some data examples is so large that decisions can be effectively arbitrary.
We develop an ensembling algorithm that abstains from classification when a prediction would be arbitrary.
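A minimal sketch of such an abstaining ensemble, assuming bootstrap resampling, scikit-learn's LogisticRegression as a stand-in base classifier, and a fixed agreement threshold; the paper's algorithm and its notion of arbitrariness are more refined.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def abstaining_ensemble(X_train, y_train, X_test, n_models=25, agree=0.8, seed=0):
    """Train an ensemble on bootstrap resamples and abstain (return -1)
    wherever the models disagree too much, i.e. where the prediction is
    effectively arbitrary across plausible trained models.
    Assumes labels are nonnegative integers."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        clf = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
        votes.append(clf.predict(X_test))
    votes = np.stack(votes)               # (n_models, n_test)
    # Majority vote and its agreement rate per test point.
    preds = np.array([np.bincount(v).argmax() for v in votes.T])
    agreement = (votes == preds).mean(axis=0)
    preds[agreement < agree] = -1         # abstain when arbitrary
    return preds
```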
arXiv Detail & Related papers (2023-01-27T06:52:04Z)
- Realistic Evaluation of Transductive Few-Shot Learning [41.06192162435249]
Transductive inference is widely used in few-shot learning.
We study the effect of arbitrary class distributions within the query sets of few-shot tasks at inference.
We experimentally evaluate state-of-the-art transductive methods on 3 widely used data sets.
arXiv Detail & Related papers (2022-04-24T03:35:06Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
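A minimal Sinkhorn-style sketch of the alternating-normalization idea: stack one uncertain prediction with a set of confident ones, then alternately normalize columns toward the class prior and rows back to probability distributions. The exact CAN procedure (its anchor set and scaling parameter) may differ; this function and its defaults are illustrative.

```python
import numpy as np

def alternating_normalization(P_confident, p_query, prior=None, iters=5):
    """Re-adjust one example's predicted class distribution by alternately
    normalizing a stacked prediction matrix: columns are rescaled to match
    the class prior, rows are renormalized to sum to one."""
    A = np.vstack([P_confident, p_query]).astype(float)
    k = A.shape[1]
    prior = np.full(k, 1.0 / k) if prior is None else np.asarray(prior, float)
    for _ in range(iters):
        A = A / A.sum(axis=0, keepdims=True) * prior  # columns match prior
        A = A / A.sum(axis=1, keepdims=True)          # rows sum to one
    return A[-1]  # adjusted distribution for the uncertain example

# Toy example: confident predictions favoring class 0 pull an
# uncertain 50/50 prediction toward class 0.
P = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
print(alternating_normalization(P, np.array([0.5, 0.5])))
```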
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Multi-Class Classification from Single-Class Data with Confidences [90.48669386745361]
We propose an empirical risk minimization framework that is loss-/model-/optimizer-independent.
We show that our method can be Bayes-consistent with a simple modification even if the provided confidences are highly noisy.
arXiv Detail & Related papers (2021-06-16T15:38:13Z)
- A Statistical Test for Probabilistic Fairness [11.95891442664266]
We propose a statistical hypothesis test for detecting unfair classifiers.
We show both theoretically and empirically that the proposed test is correct.
In addition, the proposed framework offers interpretability by identifying the most favorable perturbation of the data.
arXiv Detail & Related papers (2020-12-09T00:20:02Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
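A minimal sketch of the second stage, assuming feature vectors from a self-supervised encoder have already been computed and that scikit-learn is available; the k-nearest-neighbor distance score used here is one simple choice of one-class classifier on learned representations, not necessarily the paper's.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def one_class_scores(train_feats, test_feats, k=5):
    """Score test points by their mean distance to the k nearest training
    features from the one-class (normal) data. Larger scores indicate
    likely outliers."""
    # L2-normalize so Euclidean distance behaves like cosine dissimilarity.
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    nn = NearestNeighbors(n_neighbors=k).fit(train)
    dists, _ = nn.kneighbors(test)
    return dists.mean(axis=1)
```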
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Fairness with Overlapping Groups [15.154984899546333]
A standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
We reconsider this standard fair classification problem using a probabilistic population analysis.
Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures.
arXiv Detail & Related papers (2020-06-24T05:01:10Z)
- Classifier uncertainty: evidence, potential impact, and probabilistic treatment [0.0]
We present an approach to quantify the uncertainty of classification performance metrics based on a probability model of the confusion matrix.
We show that uncertainties can be surprisingly large and limit performance evaluation.
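A minimal sketch of one such probability model, assuming a multinomial likelihood over the confusion-matrix cells with a symmetric Dirichlet prior; the paper's model and the metrics it covers may differ in detail.

```python
import numpy as np

def metric_uncertainty(confusion, n_samples=10000, alpha=1.0, seed=0):
    """Quantify the uncertainty of accuracy implied by a confusion matrix:
    place a Dirichlet posterior over the cell probabilities and sample
    the metric's posterior distribution. Returns the posterior mean and
    a 95% credible interval."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(confusion, float).ravel()
    k = int(np.sqrt(counts.size))
    samples = rng.dirichlet(counts + alpha, size=n_samples)
    # Accuracy = sum of diagonal cell probabilities.
    diag = np.eye(k, dtype=bool).ravel()
    acc = samples[:, diag].sum(axis=1)
    return acc.mean(), np.percentile(acc, [2.5, 97.5])

# Toy example: a 2x2 confusion matrix with few observations gives a
# surprisingly wide credible interval around the point-estimate accuracy.
print(metric_uncertainty([[8, 2], [1, 9]]))
```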
arXiv Detail & Related papers (2020-06-19T12:49:19Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn, from labeled data, a scoring function that ranks positive individuals higher than negative ones.
There are rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and utility in the bipartite ranking scenario.
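A minimal sketch of one possible post-hoc adjustment, matching score distributions across groups via within-group quantiles; the paper's framework trades off utility and fairness far more carefully than this, so treat the function below purely as an illustration.

```python
import numpy as np

def quantile_adjust(scores, groups):
    """Post-hoc score adjustment: replace each score by its within-group
    quantile so score distributions are matched across protected groups,
    at some cost to ranking utility."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    out = np.empty_like(scores)
    for g in np.unique(groups):
        m = groups == g
        ranks = scores[m].argsort().argsort() + 1  # 1..n_g within group
        out[m] = ranks / (m.sum() + 1)             # quantiles in (0, 1)
    return out

# Toy example: group "b" scores are systematically deflated; after
# adjustment both groups map to the same quantiles {0.75, 0.5, 0.25}.
s = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])
g = np.array(["a", "a", "a", "b", "b", "b"])
print(quantile_adjust(s, g))
```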
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.