Pairwise Fairness for Ordinal Regression
- URL: http://arxiv.org/abs/2105.03153v1
- Date: Fri, 7 May 2021 10:33:42 GMT
- Title: Pairwise Fairness for Ordinal Regression
- Authors: Matthäus Kleindessner, Samira Samadi, Muhammad Bilal Zafar,
Krishnaram Kenthapadi, Chris Russell
- Abstract summary: We adapt two fairness notions previously considered in fair ranking and propose a strategy for training a predictor that is approximately fair according to either notion.
Our predictor consists of a threshold model, composed of a scoring function and a set of thresholds.
We show that our strategy allows us to effectively explore the accuracy-vs-fairness trade-off and that it often compares favorably to "unfair" state-of-the-art methods for ordinal regression.
- Score: 22.838858781036574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We initiate the study of fairness for ordinal regression, or ordinal
classification. We adapt two fairness notions previously considered in fair
ranking and propose a strategy for training a predictor that is approximately
fair according to either notion. Our predictor consists of a threshold model,
composed of a scoring function and a set of thresholds, and our strategy is
based on a reduction to fair binary classification for learning the scoring
function and local search for choosing the thresholds. We can control the
extent to which we care about the accuracy vs the fairness of the predictor via
a parameter. In extensive experiments we show that our strategy allows us to
effectively explore the accuracy-vs-fairness trade-off and that it often
compares favorably to "unfair" state-of-the-art methods for ordinal regression
in that it yields predictors that are only slightly less accurate, but
significantly more fair.
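The abstract describes a threshold model (a scoring function plus a set of thresholds) with thresholds chosen by local search under an accuracy-vs-fairness trade-off parameter. The following is a minimal sketch of that structure, not the authors' implementation: the scoring function is assumed to be given (e.g., from a separately trained, possibly fair, binary classifier), the paper's pairwise fairness notion is replaced here with a simpler group-disparity proxy, and all names and parameters are illustrative.

```python
# Minimal sketch of a threshold model for ordinal regression with K classes:
# predicted label = 1 + number of thresholds below the score. Thresholds are
# chosen by greedy local search on  error + lam * group disparity, where lam
# is the accuracy-vs-fairness trade-off parameter.
import numpy as np

def predict_labels(scores, thresholds):
    """Ordinal prediction: label in {1, ..., K} from sorted thresholds."""
    thresholds = np.sort(np.asarray(thresholds))
    return 1 + np.searchsorted(thresholds, scores, side="left")

def objective(scores, y, groups, thresholds, lam):
    """Mean absolute error plus lam times the gap between per-group errors
    (a stand-in for the paper's pairwise fairness violation)."""
    pred = predict_labels(scores, thresholds)
    err = np.abs(pred - y)
    group_errs = [err[groups == g].mean() for g in np.unique(groups)]
    return err.mean() + lam * (max(group_errs) - min(group_errs))

def local_search_thresholds(scores, y, groups, num_classes, lam,
                            num_candidates=50, max_rounds=20):
    """Greedily move one threshold at a time to the candidate value that
    most reduces the objective, keeping the thresholds sorted."""
    candidates = np.quantile(scores, np.linspace(0.0, 1.0, num_candidates))
    thresholds = np.quantile(scores, np.linspace(0, 1, num_classes + 1)[1:-1])
    best = objective(scores, y, groups, thresholds, lam)
    for _ in range(max_rounds):
        improved = False
        for j in range(len(thresholds)):
            for c in candidates:
                trial = thresholds.copy()
                trial[j] = c
                trial = np.sort(trial)
                val = objective(scores, y, groups, trial, lam)
                if val < best - 1e-12:
                    thresholds, best, improved = trial, val, True
        if not improved:
            break
    return thresholds

# Example usage with synthetic data (5 ordinal classes, two groups):
rng = np.random.default_rng(0)
y = rng.integers(1, 6, size=1000)
groups = rng.integers(0, 2, size=1000)
scores = y + rng.normal(0.0, 1.0, size=1000)   # stand-in for a learned scorer
th = local_search_thresholds(scores, y, groups, num_classes=5, lam=0.5)
print(predict_labels(scores, th)[:10])
```

Larger values of the trade-off parameter push the local search toward thresholds with smaller disparity between groups, at the cost of overall accuracy.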
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Bayes-Optimal Fair Classification with Linear Disparity Constraints via Pre-, In-, and Post-processing [32.5214395114507]
We develop methods for Bayes-optimal fair classification, aiming to minimize classification error subject to given group fairness constraints.
We show that several popular disparity measures -- the deviations from demographic parity, equality of opportunity, and predictive equality -- are bilinear.
Our methods control disparity directly while achieving near-optimal fairness-accuracy tradeoffs.
arXiv Detail & Related papers (2024-02-05T08:59:47Z) - Counterfactual Fairness for Predictions using Generative Adversarial Networks [28.65556399421874]
We develop a novel deep neural network called Generative Counterfactual Fairness Network (GCFN) for making predictions under counterfactual fairness.
Our method is mathematically guaranteed to satisfy the notion of counterfactual fairness.
arXiv Detail & Related papers (2023-10-26T17:58:39Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm that retains strong predictive performance while generalizing better to unseen data is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z) - Arbitrariness and Social Prediction: The Confounding Role of Variance in Fair Classification [31.392067805022414]
Variance in predictions across different trained models is a significant, under-explored source of error in fair binary classification.
In practice, the variance on some data examples is so large that decisions can be effectively arbitrary.
We develop an ensembling algorithm that abstains from classification when a prediction would be arbitrary.
arXiv Detail & Related papers (2023-01-27T06:52:04Z) - Towards Fair Classification against Poisoning Attacks [52.57443558122475]
We study the poisoning scenario where the attacker can insert a small fraction of samples into training data.
We propose a general and theoretically guaranteed framework which accommodates traditional defense methods to fair classification against poisoning attacks.
arXiv Detail & Related papers (2022-10-18T00:49:58Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across protected groups.
We propose a model post-processing framework for balancing ranking fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Fair Regression with Wasserstein Barycenters [39.818025466204055]
We study the problem of learning a real-valued function that satisfies the Demographic Parity constraint.
This constraint requires the distribution of the predicted output to be independent of the sensitive attribute.
We establish a connection between fair regression and optimal transport theory, from which we derive a closed-form expression for the optimal fair predictor.
arXiv Detail & Related papers (2020-06-12T16:10:41Z)
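The last entry above (Fair Regression with Wasserstein Barycenters) characterizes the optimal demographic-parity-fair regressor via a barycenter of the group-wise prediction distributions. Below is a minimal sketch of the quantile-averaging post-processing that this kind of result suggests; it is an assumed, illustrative reading, not that paper's code, and the function name and interface are hypothetical.

```python
# Push each group's predictions through its own empirical ranks, then through
# the population-weighted average of the group quantile functions, so that all
# groups end up with (approximately) the same output distribution.
import numpy as np

def barycenter_postprocess(preds, groups):
    """Return adjusted predictions with a common distribution across groups."""
    preds = np.asarray(preds, dtype=float)
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()
    adjusted = np.empty_like(preds)
    for g in uniq:
        mask = groups == g
        # rank of each prediction within its own group, mapped into (0, 1)
        ranks = (np.argsort(np.argsort(preds[mask])) + 0.5) / mask.sum()
        # barycenter quantile function: weighted average of group quantiles
        adjusted[mask] = sum(
            w * np.quantile(preds[groups == h], ranks)
            for h, w in zip(uniq, weights)
        )
    return adjusted

# Example: group-dependent predictions become (roughly) identically distributed.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
preds = rng.normal(loc=groups, scale=1.0)
fair = barycenter_postprocess(preds, groups)
print(fair[groups == 0].mean(), fair[groups == 1].mean())
```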
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.