Promises and Challenges of Causality for Ethical Machine Learning
- URL: http://arxiv.org/abs/2201.10683v2
- Date: Wed, 26 Oct 2022 17:59:02 GMT
- Title: Promises and Challenges of Causality for Ethical Machine Learning
- Authors: Aida Rahmattalabi, Alice Xiang
- Abstract summary: We lay out the conditions for the appropriate application of causal fairness under the "potential outcomes framework."
We highlight key aspects of causal inference that are often ignored in the causal fairness literature.
We argue that such conceptualization of the intervention is key in evaluating the validity of causal assumptions.
- Score: 2.1946447418179664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been increasing interest in causal reasoning for
designing fair decision-making systems due to its compatibility with legal
frameworks, interpretability for human stakeholders, and robustness to spurious
correlations inherent in observational data, among other factors. The recent
attention to causal fairness, however, has been accompanied by great
skepticism due to practical and epistemological challenges with applying
current causal fairness approaches in the literature. Motivated by the
long-standing empirical work on causality in econometrics, social sciences, and
biomedical sciences, in this paper we lay out the conditions for appropriate
application of causal fairness under the "potential outcomes framework." We
highlight key aspects of causal inference that are often ignored in the causal
fairness literature. In particular, we discuss the importance of specifying the
nature and timing of interventions on social categories such as race or gender.
More precisely, instead of postulating an intervention on immutable attributes, we
propose a shift in focus to their perceptions and discuss the implications for
fairness evaluation. We argue that such a conceptualization of the intervention
is key to evaluating the validity of causal assumptions and to conducting sound
causal analysis, including avoiding post-treatment bias. Subsequently, we
illustrate how causality can address the limitations of existing fairness
metrics, including those that depend upon statistical correlations.
Specifically, we introduce causal variants of common statistical notions of
fairness, and we make a novel observation that under the causal framework there
is no fundamental disagreement between different notions of fairness. Finally,
we conduct extensive experiments where we demonstrate our approach for
evaluating and mitigating unfairness, especially when post-treatment variables
are present.
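The post-treatment bias discussed in the abstract can be illustrated with a minimal simulation. The data-generating process below is hypothetical (not from the paper): A is a perceived group attribute with no direct effect on the outcome Y, M is a post-treatment variable (e.g., a screening score) influenced by both A and an unobserved factor U, and naively "controlling for" M opens the collider path A -> M <- U -> Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process (illustrative only):
#   A = perceived group membership; U = unobserved aptitude
#   M = post-treatment screening score, influenced by both A and U
#   Y = decision outcome, driven by M and U, with NO direct effect of A
A = rng.integers(0, 2, n)
U = rng.normal(0.0, 1.0, n)
M = 0.8 * A + U + rng.normal(0.0, 1.0, n)
Y = (M + U + rng.normal(0.0, 1.0, n) > 0.8).astype(int)

# Total disparity (statistical-parity gap): positive, mediated through M.
gap_marginal = Y[A == 1].mean() - Y[A == 0].mean()

# Naive "control for the score" comparison: conditioning on the
# post-treatment variable M induces a spurious A-U association, so the
# estimated gap is distorted and can even flip sign, despite the direct
# effect of A being zero by construction.
band = np.abs(M - 1.0) < 0.2
gap_conditioned = Y[(A == 1) & band].mean() - Y[(A == 0) & band].mean()

print(f"statistical-parity gap:    {gap_marginal:+.3f}")
print(f"gap conditioned on M band: {gap_conditioned:+.3f}")
```

With these (assumed) parameters the marginal gap is positive while the M-conditioned gap turns negative, showing why the timing of the intervention relative to M matters for sound causal analysis.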
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate the biases of learning models against subgroups defined by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata for machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking an anti-causal perspective, we draw explicit connections between a common fairness criterion (separation) and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- On the Need and Applicability of Causality for Fair Machine Learning [0.0]
We argue that causality is crucial in evaluating the fairness of automated decisions.
We point out the social impact of non-causal predictions and the legal anti-discrimination process that relies on causal claims.
arXiv Detail & Related papers (2022-07-08T10:37:22Z)
- What-is and How-to for Fairness in Machine Learning: A Survey, Reflection, and Perspective [13.124434298120494]
We review and reflect on various fairness notions previously proposed in machine learning literature.
We also consider the long-term impact induced by current predictions and decisions.
This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) and the means (which spectrum of fairness analysis is of interest) to fulfill the intended purpose.
arXiv Detail & Related papers (2022-06-08T18:05:46Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
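The idea of measuring fairness through a model's sensitivity to perturbations of its inputs can be sketched in a few lines. The snippet below is a simplified finite-difference stand-in, not the paper's ACCUMULATED PREDICTION SENSITIVITY metric: it counterfactually flips a (hypothetical) protected attribute and averages the change in the score of an assumed linear classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n, d = 5_000, 4
X = rng.normal(size=(n, d))   # non-protected features (synthetic)
a = rng.integers(0, 2, n)     # protected attribute (hypothetical)

# Fixed weights of an assumed linear classifier over X.
w = np.array([0.5, -0.3, 0.8, 0.1])

def predict(X, a, w_a):
    """Score of a linear model that puts weight w_a on the attribute."""
    return sigmoid(X @ w + w_a * a)

def flip_sensitivity(w_a):
    """Mean absolute change in the score when the protected attribute is
    counterfactually flipped -- a simplified prediction-sensitivity proxy."""
    return np.abs(predict(X, 1 - a, w_a) - predict(X, a, w_a)).mean()

print(f"sensitivity, attribute ignored: {flip_sensitivity(0.0):.3f}")
print(f"sensitivity, attribute used:    {flip_sensitivity(1.0):.3f}")
```

A model whose score ignores the attribute has zero flip sensitivity, while one that uses it shows a strictly positive value, which is the intuition behind linking such a metric to statistical parity.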
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Survey on Causal-based Machine Learning Fairness Notions [4.157415305926584]
This paper examines an exhaustive list of causal-based fairness notions and studies their applicability in real-world scenarios.
As the majority of causal-based fairness notions are defined in terms of non-observable quantities, their deployment in practice requires computing or estimating those quantities.
arXiv Detail & Related papers (2020-10-19T14:28:55Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.