Fairness-Accuracy Trade-Offs: A Causal Perspective
- URL: http://arxiv.org/abs/2405.15443v1
- Date: Fri, 24 May 2024 11:19:52 GMT
- Title: Fairness-Accuracy Trade-Offs: A Causal Perspective
- Authors: Drago Plecko, Elias Bareinboim
- Abstract summary: We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
- Score: 58.06306331390586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Systems based on machine learning may exhibit discriminatory behavior based on sensitive characteristics such as gender, sex, religion, or race. In light of this, various notions of fairness and methods to quantify discrimination have been proposed, leading to the development of numerous approaches for constructing fair predictors. At the same time, imposing fairness constraints may decrease the utility of the decision-maker, highlighting a tension between fairness and utility. This tension is also recognized in legal frameworks, for instance in the disparate impact doctrine of Title VII of the Civil Rights Act of 1964 -- in which specific attention is given to considerations of business necessity -- possibly allowing the use of proxy variables associated with the sensitive attribute in case a high enough utility cannot be achieved without them. In this work, we analyze the tension between fairness and accuracy through a causal lens for the first time. We introduce the notion of a path-specific excess loss (PSEL) that captures how much the predictor's loss increases when a causal fairness constraint is enforced. We then show that the total excess loss (TEL), defined as the difference between the loss of a predictor that is fair along all causal pathways and the loss of an unconstrained predictor, can be decomposed into a sum of more local PSELs. At the same time, enforcing a causal constraint often reduces the disparity between demographic groups. Thus, we introduce a quantity that summarizes the fairness-utility trade-off, called the causal fairness/utility ratio, defined as the ratio of the reduction in discrimination to the excess loss from constraining a causal pathway. This quantity is suitable for comparing the fairness-utility trade-off across causal pathways. Finally, as our approach requires causally-constrained fair predictors, we introduce a new neural approach for causally-constrained fair learning.
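For concreteness, the sketch below illustrates how the quantities named in the abstract could be tabulated once the constrained predictors are trained: path-specific excess losses (PSELs) from adding causal-pathway constraints one at a time, the total excess loss (TEL) and its decomposition into those local PSELs, and a causal fairness/utility ratio per pathway. The pathway names, loss and disparity numbers, and the telescoping form of the decomposition are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch, assuming hypothetical losses/disparities for three causal
# pathways (direct, indirect, spurious); mirrors the abstract's verbal definitions.

# Losses of predictors with causal fairness constraints added one pathway at a time.
# "none" is the unconstrained predictor; the last entry is fair along all pathways.
losses = {
    "none": 0.210,                       # unconstrained predictor
    "+direct": 0.220,                    # direct pathway constrained
    "+direct+indirect": 0.245,           # ... plus indirect pathway
    "+direct+indirect+spurious": 0.260,  # fair along all pathways
}
order = list(losses)

# Path-specific excess loss (PSEL): loss increase from constraining one more pathway.
psel = {order[i]: losses[order[i]] - losses[order[i - 1]] for i in range(1, len(order))}

# Total excess loss (TEL): fully constrained vs. unconstrained predictor.
tel = losses[order[-1]] - losses[order[0]]

# The TEL telescopes into a sum of the local PSELs (exact by construction here).
assert abs(tel - sum(psel.values())) < 1e-12

# Disparity between demographic groups before/after each constraint (hypothetical),
# shrinking as more pathways are constrained.
disparity = {
    "none": 0.120,
    "+direct": 0.080,
    "+direct+indirect": 0.050,
    "+direct+indirect+spurious": 0.045,
}

# Causal fairness/utility ratio: discrimination removed per unit of excess loss
# incurred when a pathway's constraint is added.
ratio = {
    k: (disparity[order[i - 1]] - disparity[k]) / psel[k]
    for i, k in enumerate(order) if i > 0
}
print(ratio)
```

A larger ratio flags a pathway where the constraint removes more discrimination per unit of lost utility, which is the cross-pathway comparison the abstract describes.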
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z) - Interventional Fairness on Partially Known Causal Graphs: A Constrained Optimization Approach [44.48385991344273]
We propose a framework for achieving causal fairness based on the notion of interventions when the true causal graph is partially known.
The proposed approach involves modeling fair prediction using a class of causal DAGs that can be learned from observational data combined with domain knowledge.
Results on both simulated and real-world datasets demonstrate the effectiveness of this method.
arXiv Detail & Related papers (2024-01-19T11:20:31Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but rather complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z) - Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
Taking the perspective of anti-causal prediction, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z) - Understanding Instance-Level Impact of Fairness Constraints [12.866655972682254]
We study the influence of training examples when fairness constraints are imposed.
We find that training on a subset of weighty (influential) data examples leads to lower fairness violations, at the cost of some accuracy.
arXiv Detail & Related papers (2022-06-30T17:31:33Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness [14.710365964629066]
In addition to reproducing discriminatory relationships in the training data, machine learning systems can also introduce or amplify discriminatory effects.
We refer to this as introduced unfairness, and investigate the conditions under which it may arise.
We propose introduced total variation as a measure of introduced unfairness, and establish graphical conditions under which it may be incentivised to occur.
arXiv Detail & Related papers (2022-02-22T11:16:26Z) - Survey on Causal-based Machine Learning Fairness Notions [4.157415305926584]
This paper examines an exhaustive list of causal-based fairness notions and studies their applicability in real-world scenarios.
As the majority of causal-based fairness notions are defined in terms of non-observable quantities, their deployment in practice requires computing or estimating those quantities.
arXiv Detail & Related papers (2020-10-19T14:28:55Z) - On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)