Fair Bayes-Optimal Classifiers Under Predictive Parity
- URL: http://arxiv.org/abs/2205.07182v1
- Date: Sun, 15 May 2022 04:58:10 GMT
- Title: Fair Bayes-Optimal Classifiers Under Predictive Parity
- Authors: Xianli Zeng, Edgar Dobriban and Guang Cheng
- Abstract summary: This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups.
We propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied.
- Score: 33.648053823193855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing concerns about disparate effects of AI have motivated a great deal
of work on fair machine learning. Existing works mainly focus on independence-
and separation-based measures (e.g., demographic parity, equality of
opportunity, equalized odds), while sufficiency-based measures such as
predictive parity are much less studied. This paper considers predictive
parity, which requires equalizing the probability of success given a positive
prediction among different protected groups. We prove that, if the overall
performances of different groups vary only moderately, all fair Bayes-optimal
classifiers under predictive parity are group-wise thresholding rules. Perhaps
surprisingly, this may not hold if group performance levels vary widely; in
this case we find that predictive parity among protected groups may lead to
within-group unfairness. We then propose an algorithm we call FairBayes-DPP,
aiming to ensure predictive parity when our condition is satisfied.
FairBayes-DPP is an adaptive thresholding algorithm that aims to achieve
predictive parity, while also seeking to maximize test accuracy. We provide
supporting experiments conducted on synthetic and empirical data.
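The group-wise thresholding rule at the center of these results is easy to illustrate. Below is a minimal NumPy sketch of post-hoc, per-group threshold selection targeting a common positive predictive value (PPV); the function names, grid search, and toy data are illustrative scaffolding rather than the authors' FairBayes-DPP procedure, and the search presumes PPV rises with the threshold, in the spirit of the paper's moderate-performance-gap condition.

```python
import numpy as np

def group_ppv(scores, labels, threshold):
    """Empirical PPV within one group: P(Y = 1 | score >= threshold)."""
    positive = scores >= threshold
    return labels[positive].mean() if positive.any() else np.nan

def fit_group_thresholds(scores, labels, groups, target_ppv):
    """For each protected group, pick the smallest threshold whose empirical
    PPV reaches target_ppv, so all groups share (approximately) one PPV."""
    grid = np.linspace(0.0, 1.0, 201)
    thresholds = {}
    for g in np.unique(groups):
        in_g = groups == g
        ppvs = np.array([group_ppv(scores[in_g], labels[in_g], t) for t in grid])
        feasible = np.flatnonzero(ppvs >= target_ppv)
        thresholds[g] = grid[feasible[0]] if feasible.size else grid[-1]
    return thresholds

# Toy usage: two groups with different base rates and noisy scores.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=5000)
labels = rng.binomial(1, np.where(groups == 0, 0.3, 0.5))
scores = np.clip(0.4 * labels + rng.normal(0.3, 0.15, size=5000), 0.0, 1.0)

for g, t in fit_group_thresholds(scores, labels, groups, target_ppv=0.8).items():
    in_g = groups == g
    print(f"group {g}: threshold {t:.2f}, "
          f"PPV {group_ppv(scores[in_g], labels[in_g], t):.2f}")
```

When the monotonicity behind this search fails, which is the widely-varying-performance regime the abstract flags, the paper shows that rules of this kind can stop being optimal and predictive parity can induce within-group unfairness.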
Related papers
- Conformal Prediction Sets Can Cause Disparate Impact [4.61590049339329]
Conformal prediction is a promising method for quantifying the uncertainty of machine learning models.
We show that providing prediction sets can increase the unfairness of the decisions they inform.
Instead of equalizing coverage, we propose to equalize set sizes across groups, which empirically leads to fairer outcomes.
arXiv Detail & Related papers (2024-10-02T18:00:01Z)
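As a rough sketch of the set-size idea (a schematic reading of this abstract rather than the authors' procedure; the probability-threshold sets and the helper `size_equalizing_thresholds` are assumptions), one can tune a per-group threshold on predicted class probabilities so that each group's average prediction-set size on calibration data matches a shared target:

```python
import numpy as np

def avg_set_size(probs, tau):
    """Mean size of the prediction sets {y : p(y | x) >= tau}."""
    return (probs >= tau).sum(axis=1).mean()

def size_equalizing_thresholds(probs, groups, target_size):
    """Per-group probability thresholds chosen so that each group's average
    prediction-set size on calibration data is as close as possible to a
    shared target_size (probs: one row per example, one column per class)."""
    grid = np.linspace(0.0, 1.0, 201)
    thresholds = {}
    for g in np.unique(groups):
        sizes = np.array([avg_set_size(probs[groups == g], t) for t in grid])
        thresholds[g] = grid[np.abs(sizes - target_size).argmin()]
    return thresholds
```

Pinning set sizes this way departs from equalizing coverage across groups, which is exactly the substitution the abstract describes.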
- Assessing Group Fairness with Social Welfare Optimization [0.9217021281095907]
This paper explores whether a broader conception of social justice, based on optimizing a social welfare function, can be useful for assessing various definitions of parity.
We show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity.
In addition, we find that predictive rate parity is of limited usefulness.
arXiv Detail & Related papers (2024-05-19T01:41:04Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- FaiREE: Fair Classification with Finite-Sample and Distribution-Free Guarantee [40.10641140860374]
FaiREE is a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees.
FaiREE is shown to have favorable performance over state-of-the-art algorithms.
arXiv Detail & Related papers (2022-11-28T05:16:20Z)
- Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency [0.0]
This paper focuses on the fairness concepts of positive predictive value (PPV) parity, false omission rate (FOR) parity, and sufficiency.
We show that group-specific threshold rules are optimal for PPV parity and FOR parity.
We also provide a solution for the optimal decision rules satisfying the fairness constraint sufficiency.
arXiv Detail & Related papers (2022-06-05T18:47:34Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's sensitivity to perturbations in its input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
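A generic perturbation-based score in this spirit can be computed by finite differences. The sketch below is a stand-in rather than the paper's exact accumulated prediction sensitivity; the uniform feature weighting and the toy logistic model are assumptions:

```python
import numpy as np

def prediction_sensitivity(predict_proba, x, eps=1e-3, weights=None):
    """Weighted sum of |d p(x) / d x_j| estimated by finite differences,
    where p is the model's positive-class probability."""
    base = predict_proba(x[None, :])[0]
    grads = np.empty_like(x)
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] += eps
        grads[j] = (predict_proba(x_pert[None, :])[0] - base) / eps
    if weights is None:
        weights = np.full(x.size, 1.0 / x.size)  # uniform weights (an assumption)
    return float(np.abs(grads) @ weights)

# Toy model: logistic regression with fixed coefficients.
w = np.array([1.5, -0.5, 0.0])
predict_proba = lambda X: 1.0 / (1.0 + np.exp(-(X @ w)))
print(prediction_sensitivity(predict_proba, np.array([0.2, 1.0, -0.3])))
```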
- Bayes-Optimal Classifiers under Group Fairness [32.52143951145071]
This paper provides a unified framework for deriving Bayes-optimal classifiers under group fairness.
We propose a group-based thresholding method, FairBayes, that can directly control disparity and achieve an essentially optimal fairness-accuracy tradeoff.
arXiv Detail & Related papers (2022-02-20T03:35:44Z)
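In contrast with the PPV sketch earlier, demographic parity is the simple case for group-based thresholding: predicting positive at a common rate within each group satisfies the constraint by construction. A minimal sketch (quantile thresholds only; the FairBayes method summarized above additionally controls the fairness-accuracy tradeoff, which this omits):

```python
import numpy as np

def parity_thresholds(scores, groups, positive_rate):
    """Per-group score thresholds so that each group is predicted positive
    at (approximately) the same rate: demographic parity by construction."""
    return {g: np.quantile(scores[groups == g], 1.0 - positive_rate)
            for g in np.unique(groups)}
```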
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.