Principal Fairness for Human and Algorithmic Decision-Making
- URL: http://arxiv.org/abs/2005.10400v5
- Date: Thu, 24 Mar 2022 20:35:58 GMT
- Title: Principal Fairness for Human and Algorithmic Decision-Making
- Authors: Kosuke Imai, Zhichao Jiang
- Abstract summary: We introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making.
Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision.
- Score: 1.2691047660244335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using the concept of principal stratification from the causal inference
literature, we introduce a new notion of fairness, called principal fairness,
for human and algorithmic decision-making. The key idea is that one should not
discriminate among individuals who would be similarly affected by the decision.
Unlike the existing statistical definitions of fairness, principal fairness
explicitly accounts for the fact that individuals can be impacted by the
decision. Furthermore, we explain how principal fairness differs from the
existing causality-based fairness criteria. In contrast to the counterfactual
fairness criteria, for example, principal fairness considers the effects of
the decision in question rather than those of the protected attributes of interest. We
briefly discuss how to approach empirical evaluation and policy learning
problems under the proposed principal fairness criterion.
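As a compact restatement of the key idea (our notation, not necessarily the paper's exact formalism): with a binary decision D, protected attribute A, and potential outcomes Y(1) and Y(0) under the two decisions, principal fairness asks that the decision be independent of the protected attribute within each principal stratum, i.e., among individuals who would be affected by the decision in the same way.

```latex
% Principal strata are defined by the joint potential outcomes (Y(1), Y(0)):
% how the outcome would respond to decision D = 1 versus D = 0.
% Principal fairness (illustrative notation) requires
\[
  D \;\perp\!\!\!\perp\; A \;\big|\; \bigl(Y(1),\, Y(0)\bigr).
\]
% By contrast, counterfactual fairness constrains the causal effect of the
% protected attribute A on the decision D, whereas principal fairness
% constrains how D may depend on A among individuals whose outcomes would
% respond to the decision in the same way.
```

With a binary decision and a binary outcome this gives four principal strata (for example, individuals whose outcome would be positive regardless of the decision), and the condition amounts to equal decision probabilities across groups within each stratum.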
Related papers
- Subjective fairness in algorithmic decision-support [0.0]
The treatment of fairness in the decision-making literature usually involves quantifying fairness using objective measures.
This work takes a critical stance and highlights the limitations of these approaches using sociological insights.
We redefine fairness as a subjective property, moving from a top-down to a bottom-up approach.
arXiv Detail & Related papers (2024-06-28T14:37:39Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control, in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze, through a causal lens, the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
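For the outcome-control entry above, the benefit notion can be written as a contrast of potential outcomes; the notation below (decision D, outcome Y, covariates X) is an illustrative shorthand rather than the paper's exact definition.

```latex
% Benefit of a positive decision for an individual with covariates X = x
% (illustrative notation): the expected gain in the outcome Y from
% receiving D = 1 rather than D = 0.
\[
  \Delta(x) \;=\; \mathbb{E}\bigl[\, Y_{d=1} - Y_{d=0} \;\big|\; X = x \,\bigr].
\]
% The entry notes that this benefit may itself be influenced by the protected
% attribute, which is what the proposed causal tools are designed to analyze.
```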
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Causal Conceptions of Fairness and their Consequences [1.9006392177894293]
We show that two families of causal definitions of algorithmic fairness result in strongly dominated decision policies.
We prove that, in a stylized college admissions example, the resulting policies require admitting all students with the same probability, regardless of academic qualifications or group membership.
arXiv Detail & Related papers (2022-07-12T04:26:26Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
We propose a metric, ACCUMULATED PREDICTION SENSITIVITY, which measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
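The gradient-based intuition behind prediction-sensitivity metrics can be sketched in a few lines of code. The snippet below is a simplified illustration (average input-gradient norms over a batch), not the paper's exact ACCUMULATED PREDICTION SENSITIVITY definition; `model` and the input tensors are hypothetical placeholders.

```python
# Illustrative sketch of a prediction-sensitivity measurement: average the L2
# norm of the gradient of the model's score with respect to its inputs.
# This is a simplification, not the metric from the cited paper.
import torch


def mean_prediction_sensitivity(model: torch.nn.Module, inputs: torch.Tensor) -> float:
    """Average L2 norm of d(score)/d(input) over a batch of inputs."""
    x = inputs.detach().clone().requires_grad_(True)
    scores = model(x)                                   # differentiable scores
    grads = torch.autograd.grad(scores.sum(), x)[0]     # same shape as x
    return grads.flatten(start_dim=1).norm(dim=1).mean().item()


# Hypothetical usage: compare average sensitivity across demographic groups.
# sens_a = mean_prediction_sensitivity(clf, x_group_a)
# sens_b = mean_prediction_sensitivity(clf, x_group_b)
```

A large gap between such group-level averages is the kind of signal that prediction-sensitivity metrics relate to group fairness (statistical parity) and individual fairness.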
- Promises and Challenges of Causality for Ethical Machine Learning [2.1946447418179664]
We lay out the conditions for the appropriate application of causal fairness under the "potential outcomes framework".
We highlight key aspects of causal inference that are often ignored in the causal fairness literature.
We argue that such conceptualization of the intervention is key in evaluating the validity of causal assumptions.
arXiv Detail & Related papers (2022-01-26T00:04:10Z)
- Legal perspective on possible fairness measures - A legal discussion using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
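For the conditional-fairness entry above, the defining condition can be stated compactly; D, A, and the set of fairness variables F below are illustrative shorthand for the idea described in that abstract.

```latex
% Conditional fairness (illustrative notation): statistical parity is required
% only after conditioning on a designated set of fairness variables F.
\[
  D \;\perp\!\!\!\perp\; A \;\big|\; F .
\]
% With an empty F this reduces to ordinary demographic parity; a richer F
% yields weaker, context-dependent parity requirements.
```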
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.