Attainability and Optimality: The Equalized Odds Fairness Revisited
- URL: http://arxiv.org/abs/2202.11853v1
- Date: Thu, 24 Feb 2022 01:30:31 GMT
- Title: Attainability and Optimality: The Equalized Odds Fairness Revisited
- Authors: Zeyu Tang, Kun Zhang
- Abstract summary: We consider the attainability of the Equalized Odds notion of fairness.
For classification, we prove that compared to enforcing fairness by post-processing, one can always benefit from exploiting all available features.
While stochastic prediction can attain Equalized Odds with theoretical guarantees, we also discuss its limitations and potential negative social impacts.
- Score: 8.44348159032116
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness of machine learning algorithms has been of increasing interest. In
order to suppress or eliminate discrimination in prediction, various notions as
well as approaches have been proposed to impose fairness. Given a notion of
fairness, an essential problem is then whether or not it can always be
attained, even if with an unlimited amount of data. This issue is, however, not
well addressed yet. In this paper, focusing on the Equalized Odds notion of
fairness, we consider the attainability of this criterion and, furthermore, if
it is attainable, the optimality of the prediction performance under various
settings. In particular, for prediction performed by a deterministic function
of input features, we give conditions under which Equalized Odds can hold true;
if the stochastic prediction is acceptable, we show that under mild
assumptions, fair predictors can always be derived. For classification, we
further prove that compared to enforcing fairness by post-processing, one can
always benefit from exploiting all available features during training and get
potentially better prediction performance while remaining fair. Moreover, while
stochastic prediction can attain Equalized Odds with theoretical guarantees, we
also discuss its limitation and potential negative social impacts.
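Since the paper centers on when the Equalized Odds criterion can hold, a small illustration of the criterion may help. The sketch below is not taken from the paper; it simply measures the empirical Equalized Odds gap of a binary classifier (the spread of group-conditional positive-prediction rates given the true label) and shows the kind of group-dependent randomization that stochastic post-processing builds on. All function and variable names are illustrative.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest spread, over true labels y in {0, 1}, of the group-conditional
    positive-prediction rates P(Y_hat = 1 | Y = y, A = a). A gap of 0 means
    Equalized Odds holds exactly on this sample."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(y_true == y) & (group == a)].mean()
                 for a in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

def randomize_per_group(y_pred, group, flip_prob, seed=0):
    """Toy stochastic predictor: flip each group's hard predictions with a
    group-specific probability. Choosing these probabilities suitably is how
    randomized post-processing can equalize group-conditional error rates."""
    rng = np.random.default_rng(seed)
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    p = np.array([flip_prob[a] for a in group])
    flips = rng.random(len(y_pred)) < p
    return np.where(flips, 1 - y_pred, y_pred)

# Tiny usage example with made-up data.
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
print(equalized_odds_gap(y_true, y_pred, group))
print(equalized_odds_gap(y_true, randomize_per_group(y_pred, group, {0: 0.0, 1: 0.2}), group))
```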
Related papers
- FairlyUncertain: A Comprehensive Benchmark of Uncertainty in Algorithmic Fairness [4.14360329494344]
We introduce FairlyUncertain, an axiomatic benchmark for evaluating uncertainty estimates in fairness.
Our benchmark posits that fair predictive uncertainty estimates should be consistent across learning pipelines and calibrated to observed randomness.
arXiv Detail & Related papers (2024-10-02T20:15:29Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC), which aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level. A small coverage-checking sketch follows this entry.
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
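As a rough companion to the EOC entry above (and a simplification of it: the paper additionally conditions on similar outcomes, which is omitted here), the following hypothetical sketch checks the two kinds of coverage it names, per group and overall, for prediction intervals. All names are illustrative.

```python
import numpy as np

def coverage_report(y, lower, upper, group, target=0.9):
    """Empirical coverage of prediction intervals [lower, upper], overall and per group.

    EOC-style criteria ask that per-group coverages be close to one another
    while the overall coverage stays near the target level."""
    y, lower, upper, group = map(np.asarray, (y, lower, upper, group))
    covered = (y >= lower) & (y <= upper)
    per_group = {a: float(covered[group == a].mean()) for a in np.unique(group)}
    return {
        "overall": float(covered.mean()),
        "per_group": per_group,
        "max_group_gap": max(per_group.values()) - min(per_group.values()),
        "target": target,
    }
```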
- Counterfactual Fairness for Predictions using Generative Adversarial Networks [28.65556399421874]
We develop a novel deep neural network called Generative Counterfactual Fairness Network (GCFN) for making predictions under counterfactual fairness.
Our method is mathematically guaranteed to ensure the notion of counterfactual fairness.
arXiv Detail & Related papers (2023-10-26T17:58:39Z)
- Understanding Fairness Surrogate Functions in Algorithmic Fairness [21.555040357521907]
We show that there is a surrogate-fairness gap between the fairness definition and the fairness surrogate function.
We develop a novel and general algorithm called Balanced Surrogate, which iteratively reduces the gap to mitigate unfairness. An illustrative sketch of such a gap follows this entry.
arXiv Detail & Related papers (2023-10-17T12:40:53Z)
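To make the notion of a surrogate-fairness gap concrete, here is a generic, hedged illustration (not the Balanced Surrogate algorithm from the entry above): an exact group-fairness gap computed with the 0/1 decision indicator versus the same gap computed with a smooth sigmoid surrogate, which is what gradient-based training typically optimizes. The discrepancy between the two is the kind of gap in question.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def exact_parity_gap(scores, group):
    """Group-fairness gap using the hard 0/1 decision indicator (score > 0)."""
    decide = (scores > 0).astype(float)
    return abs(decide[group == 0].mean() - decide[group == 1].mean())

def surrogate_parity_gap(scores, group):
    """Same gap with the indicator replaced by a differentiable sigmoid surrogate."""
    soft = sigmoid(scores)
    return abs(soft[group == 0].mean() - soft[group == 1].mean())

# A constraint satisfied by the surrogate need not be satisfied by the hard decisions:
# here the exact gap is 0 while the surrogate gap is clearly positive.
scores = np.array([3.0, -0.2, 0.1, -3.0])
group  = np.array([0, 0, 1, 1])
print(exact_parity_gap(scores, group), surrogate_parity_gap(scores, group))
```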
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fairness through Aleatoric Uncertainty [18.95295731419523]
We introduce the idea of leveraging aleatoric uncertainty (e.g., data ambiguity) to improve the fairness-utility trade-off.
Our central hypothesis is that aleatoric uncertainty is a key factor for algorithmic fairness.
We then propose a principled model to improve fairness when aleatoric uncertainty is high and improve utility elsewhere.
arXiv Detail & Related papers (2023-04-07T13:50:57Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The prevalence of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach could be combined with not only vanilla conformal prediction, but also other adaptive conformal prediction methods. A sketch of plain split conformal prediction follows this entry.
arXiv Detail & Related papers (2022-10-01T02:57:37Z)
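For readers unfamiliar with the conformal prediction machinery that the entry above extends, here is a hedged sketch of ordinary split conformal prediction for regression in output space; feature conformal prediction, per the abstract, moves this construction into a semantic feature space, which is not shown here. All names are illustrative.

```python
import numpy as np

def split_conformal_interval(abs_residual_cal, y_pred_test, alpha=0.1):
    """Ordinary split conformal prediction for regression.

    abs_residual_cal: |y - y_hat| on a held-out calibration set.
    Returns symmetric intervals around the test predictions that, under
    exchangeability, achieve marginal coverage of at least 1 - alpha."""
    abs_residual_cal = np.asarray(abs_residual_cal)
    y_pred_test = np.asarray(y_pred_test)
    n = len(abs_residual_cal)
    # Finite-sample-corrected quantile level, clipped to a valid probability.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(abs_residual_cal, level, method="higher")
    return y_pred_test - q, y_pred_test + q

# Toy usage with made-up calibration residuals and test predictions.
rng = np.random.default_rng(0)
lower, upper = split_conformal_interval(np.abs(rng.normal(size=200)), np.array([1.5, -0.3]))
print(lower, upper)
```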
- Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers [2.0625936401496237]
Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment.
We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers.
Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness.
arXiv Detail & Related papers (2022-02-09T15:06:45Z)