Reconciling Predictive and Statistical Parity: A Causal Approach
- URL: http://arxiv.org/abs/2306.05059v2
- Date: Fri, 22 Dec 2023 13:22:17 GMT
- Title: Reconciling Predictive and Statistical Parity: A Causal Approach
- Authors: Drago Plecko, Elias Bareinboim
- Abstract summary: We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
- Score: 68.59381759875734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since the rise of fair machine learning as a critical field of inquiry, many
different notions of how to quantify and measure discrimination have been
proposed in the literature. Some of these notions, however, were shown to be
mutually incompatible. Such findings make it appear that numerous distinct
kinds of fairness exist, making a consensus on the appropriate measure of
fairness harder to reach and hindering the application of these tools in
practice. In this paper, we investigate one of these key impossibility results,
which relates the notions of statistical and predictive parity. Specifically, we
derive a new causal decomposition formula for the fairness measures associated
with predictive parity, and obtain a novel insight into how this criterion is
related to statistical parity through the legal doctrines of disparate
treatment, disparate impact, and the notion of business necessity. Our results
show that, through a more careful causal analysis, the notions of statistical
and predictive parity are not mutually exclusive but complementary, spanning a
spectrum of fairness notions through the concept of business necessity.
Finally, we demonstrate the importance of our findings using a real-world
example.
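As a concrete illustration of the two criteria at stake (a minimal sketch on synthetic data, not code from the paper), statistical parity compares positive-prediction rates across groups, while predictive parity compares positive predictive values within the predicted-positive set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data, purely illustrative: protected attribute x, outcome y,
# prediction yhat. Base rates differ across groups, which is the setting
# where the classical impossibility result bites.
n = 10_000
x = rng.integers(0, 2, n)                          # group membership
y = rng.binomial(1, np.where(x == 1, 0.55, 0.45))  # group base rates differ
yhat = rng.binomial(1, 0.3 + 0.5 * y)              # noisy predictor of y

def statistical_parity_gap(yhat, x):
    """P(yhat = 1 | x = 1) - P(yhat = 1 | x = 0)."""
    return yhat[x == 1].mean() - yhat[x == 0].mean()

def predictive_parity_gap(y, yhat, x):
    """PPV gap: P(y = 1 | yhat = 1, x = 1) - P(y = 1 | yhat = 1, x = 0)."""
    return (y[(x == 1) & (yhat == 1)].mean()
            - y[(x == 0) & (yhat == 1)].mean())

print("statistical parity gap:", statistical_parity_gap(yhat, x))
print("predictive parity gap:", predictive_parity_gap(y, yhat, x))
```

The paper's contribution, not attempted here, is a causal decomposition of such predictive parity measures, relating them to statistical parity via causal pathways tied to business necessity.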
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
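As a hedged aside on the quantile-regression entry above (not the paper's method, which is a neural framework): under a rank-preservation assumption, a counterfactual outcome can be read off by matching quantiles between the factual and counterfactual arms. A minimal empirical sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: counterfactual inference by quantile matching under a
# rank-preservation (monotonicity) assumption. Given outcome samples under
# t=0 and t=1, map an observed factual outcome to its quantile in the
# factual arm and read off the same quantile in the counterfactual arm.
y_t0 = rng.normal(0.0, 1.0, 5000)   # outcomes observed under t=0
y_t1 = rng.normal(1.0, 1.5, 5000)   # outcomes observed under t=1

def counterfactual_outcome(y_obs, factual, counterfactual):
    """Empirical quantile mapping: Q_cf(F_f(y_obs))."""
    tau = (factual <= y_obs).mean()          # rank of y_obs in factual arm
    return np.quantile(counterfactual, tau)  # same rank, other arm

y_factual = 0.5                              # observed under t=0
print(counterfactual_outcome(y_factual, y_t0, y_t1))
```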
- The Flawed Foundations of Fair Machine Learning [0.0]
We show that there is a trade-off between statistically accurate outcomes and group-similar outcomes in any data setting where group disparities exist.
We introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes.
arXiv Detail & Related papers (2023-06-02T10:07:12Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking an anti-causal perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
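For the separation criterion mentioned in the entry above, here is a minimal synthetic check (my illustration, not the paper's code): separation asks that predictions be independent of the group given the true label, which for binary classifiers means equal true- and false-positive rates across groups.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative check of separation: yhat independent of x given y,
# i.e., equal TPR and FPR across groups.
n = 10_000
x = rng.integers(0, 2, n)                        # group membership
y = rng.binomial(1, 0.5, n)                      # true label
yhat = rng.binomial(1, np.clip(0.2 + 0.6 * y + 0.05 * x, 0, 1))

for label, name in [(1, "TPR"), (0, "FPR")]:
    r1 = yhat[(x == 1) & (y == label)].mean()
    r0 = yhat[(x == 0) & (y == label)].mean()
    print(f"{name} gap across groups: {r1 - r0:+.3f}")  # ~0 under separation
```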
- What-is and How-to for Fairness in Machine Learning: A Survey, Reflection, and Perspective [13.124434298120494]
We review and reflect on various fairness notions previously proposed in machine learning literature.
We also consider the long-term impact induced by current predictions and decisions.
This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) and the means (which spectrum of fairness analysis is of interest) to fulfill the intended purpose.
arXiv Detail & Related papers (2022-06-08T18:05:46Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
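The sketch below is a rough finite-difference proxy for perturbation-based sensitivity, only in the spirit of the metric named above; the paper's ACCUMULATED PREDICTION SENSITIVITY is defined differently and this is not its implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative proxy (not the paper's exact metric): average
# finite-difference sensitivity of a model's score to small
# perturbations of each input feature.
def logistic_model(X, w):
    return 1.0 / (1.0 + np.exp(-X @ w))

def mean_prediction_sensitivity(model, X, eps=1e-3):
    """Mean |f(x + eps*e_j) - f(x)| / eps over samples and features."""
    base = model(X)
    sens = 0.0
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        sens += np.abs(model(Xp) - base).mean() / eps
    return sens / X.shape[1]

X = rng.normal(size=(1000, 5))
w = rng.normal(size=5)
print(mean_prediction_sensitivity(lambda X_: logistic_model(X_, w), X))
```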
- Promises and Challenges of Causality for Ethical Machine Learning [2.1946447418179664]
We lay out the conditions for the appropriate application of causal fairness under the "potential outcomes framework."
We highlight key aspects of causal inference that are often ignored in the causal fairness literature.
We argue that such a conceptualization of the intervention is key in evaluating the validity of causal assumptions.
arXiv Detail & Related papers (2022-01-26T00:04:10Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
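For intuition on the quantification angle above: a standard quantification estimator is adjusted classify-and-count, which corrects the raw positive rate of a noisy sensitive-attribute classifier using its known TPR and FPR. The paper's estimators may differ, so treat this as a generic sketch:

```python
import numpy as np

# Illustrative "adjusted classify-and-count" step: estimate the prevalence
# of a hidden sensitive attribute in a target set from a noisy attribute
# classifier, correcting raw counts with its TPR/FPR measured on
# auxiliary labeled data.
def adjusted_classify_and_count(attr_preds, tpr, fpr):
    """Prevalence estimate: (observed positive rate - FPR) / (TPR - FPR)."""
    cc = attr_preds.mean()  # raw classify-and-count
    return np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0)

# Suppose the attribute classifier (TPR=0.85, FPR=0.10) flags 40% of the
# people who received a positive decision.
prevalence = adjusted_classify_and_count(np.array([1] * 400 + [0] * 600), 0.85, 0.10)
print(f"estimated group prevalence among positives: {prevalence:.3f}")
# Comparing such estimates across decision outcomes approximates group
# fairness metrics without observing each individual's sensitive attribute.
```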
- Survey on Causal-based Machine Learning Fairness Notions [4.157415305926584]
This paper examines an exhaustive list of causal-based fairness notions and studies their applicability in real-world scenarios.
As the majority of causal-based fairness notions are defined in terms of non-observable quantities, deploying them in practice requires computing or estimating those quantities.
arXiv Detail & Related papers (2020-10-19T14:28:55Z)
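To make that last point concrete (a toy sketch under an assumed linear structural causal model, not tied to any particular paper above): once an SCM is posited, otherwise non-observable quantities such as a natural direct effect can be estimated by simulating interventions with shared exogenous noise.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed toy SCM, purely illustrative:
#   A -> M -> Yhat   and   A -> Yhat
# where A is the protected attribute and M a mediator. With the SCM known,
# path-specific quantities can be simulated directly.
n = 100_000
u_m, u_y = rng.normal(size=n), rng.normal(size=n)  # shared exogenous noise

def simulate(a, m=None):
    m = (0.8 * a + u_m) if m is None else m
    return 0.5 * a + 0.6 * m + u_y, m

y1, m1 = simulate(1.0)
y0, m0 = simulate(0.0)
total_effect = (y1 - y0).mean()                 # effect of A along all paths
y_direct, _ = simulate(1.0, m=m0)               # set A=1, hold M at its A=0 value
natural_direct_effect = (y_direct - y0).mean()  # A -> Yhat path only
print(f"total effect: {total_effect:.3f}")                    # ~ 0.5 + 0.6*0.8 = 0.98
print(f"natural direct effect: {natural_direct_effect:.3f}")  # ~ 0.5
```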