Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There?
- URL: http://arxiv.org/abs/2105.01441v2
- Date: Thu, 6 May 2021 10:26:34 GMT
- Title: Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There?
- Authors: Matthias Kuppler, Christoph Kern, Ruben L. Bach, Frauke Kreuter
- Abstract summary: We show that metrics implementing equality of opportunity only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.
We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of powerful prediction algorithms led to increased automation of
high-stake decisions regarding the allocation of scarce resources such as
government spending and welfare support. This automation bears the risk of
perpetuating unwanted discrimination against vulnerable and historically
disadvantaged groups. Research on algorithmic discrimination in computer
science and other disciplines developed a plethora of fairness metrics to
detect and correct discriminatory algorithms. Drawing on robust sociological
and philosophical discourse on distributive justice, we identify the
limitations and problematic implications of prominent fairness metrics. We show
that metrics implementing equality of opportunity only apply when resource
allocations are based on deservingness, but fail when allocations should
reflect concerns about egalitarianism, sufficiency, and priority. We argue that
by cleanly distinguishing between prediction tasks and decision tasks, research
on fair machine learning could take better advantage of the rich literature on
distributive justice.
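To make the abstract's target concrete, the most common formalization of equality of opportunity in fair machine learning (Hardt et al., 2016) requires equal true positive rates across protected groups. The sketch below uses synthetic data to show what such a metric measures; it is not code from the paper, and the variable names are illustrative. The paper's argument is that this comparison only makes sense when the label stands in for deservingness; under allocation rules driven by egalitarianism, sufficiency, or priority, there is no such label to compare predictions against.

```python
# Minimal sketch: "equality of opportunity" as equal true positive rates
# across groups (Hardt et al., 2016). All data below is synthetic and only
# illustrates what such a metric measures; it is not code from the paper.
import numpy as np

def true_positive_rate(y_true, y_pred):
    """P(prediction = 1 | actual = 1)."""
    positives = y_true == 1
    return y_pred[positives].mean() if positives.any() else np.nan

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR difference between groups (0 = parity)."""
    tprs = [true_positive_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)            # protected attribute
y_true = rng.integers(0, 2, size=1000)           # "deservingness" label
y_pred = (rng.random(1000) < 0.6 * y_true + 0.2 * group).astype(int)

print("equal-opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```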
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment.
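As a rough illustration of the idea only (the paper's exact test statistic and definitions are not reproduced here): compare row-normalized confusion matrices computed separately per group and report the largest entry-wise gap as a parity error. The function names and the distance used below are assumptions for the sketch.

```python
# Hedged sketch of comparing group-conditional confusion matrices.
# Function names and the distance below are illustrative assumptions,
# not the exact definitions from the paper.
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Row-normalized confusion matrix: P(prediction | actual)."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm / cm.sum(axis=1, keepdims=True).clip(min=1)

def confusion_parity_error(y_true, y_pred, group):
    """Largest entry-wise gap between any two groups' confusion matrices."""
    mats = [confusion_matrix(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)]
    return max(np.abs(a - b).max() for a in mats for b in mats)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=2000)
y_true = rng.integers(0, 2, size=2000)
# The classifier is accurate for group 1 but random for group 0.
y_pred = np.where(group == 1, y_true, rng.integers(0, 2, size=2000))

print("confusion parity error:", confusion_parity_error(y_true, y_pred, group))
```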
arXiv Detail & Related papers (2023-07-02T04:44:19Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
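A hedged sketch of what "benefit" can mean operationally: the expected gain in the outcome Y from receiving a positive decision, estimated here with two separately fitted outcome models (a T-learner) under a no-unobserved-confounding assumption. The data, model choice, and estimator are illustrative and not taken from the paper, which develops the notion causally.

```python
# Hedged sketch: estimate how much an individual would benefit from a
# positive decision, benefit(x) ~ E[Y | do(D=1), x] - E[Y | do(D=0), x],
# using two outcome models (a "T-learner"). Assumes no unobserved
# confounding; this is an illustration, not the paper's estimator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=(n, 3))                      # individual features
d = rng.integers(0, 2, size=n)                   # historical decisions
p_y = 1 / (1 + np.exp(-(x[:, 0] + 1.5 * d)))     # a positive decision helps
y = (rng.random(n) < p_y).astype(int)            # observed outcome

m1 = LogisticRegression().fit(x[d == 1], y[d == 1])   # outcome model for D = 1
m0 = LogisticRegression().fit(x[d == 0], y[d == 0])   # outcome model for D = 0

benefit = m1.predict_proba(x)[:, 1] - m0.predict_proba(x)[:, 1]
print("average estimated benefit of a positive decision:", benefit.mean())
```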
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
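A minimal sketch of the linear-programming idea: the decision variable is a probability distribution over allocations rather than a single allocation. The specific fairness constraint used below (near-equal selection probability for candidates of near-equal merit) is an assumption standing in for the paper's axioms.

```python
# Hedged sketch: pick a *distribution* over allocations that maximizes
# expected utility subject to a fairness constraint. The constraint is an
# illustrative assumption, not the paper's axioms.
import numpy as np
from scipy.optimize import linprog

utility = np.array([0.9, 0.85, 0.4])   # utility if candidate i gets the slot
# Variables: p[i] = probability that allocation i (select candidate i) is chosen.
# Candidates 0 and 1 have nearly equal merit, so require |p0 - p1| <= 0.1.
res = linprog(
    c=-utility,                                   # maximize expected utility
    A_ub=np.array([[1, -1, 0], [-1, 1, 0]]),      # p0 - p1 <= 0.1, p1 - p0 <= 0.1
    b_ub=np.array([0.1, 0.1]),
    A_eq=np.array([[1, 1, 1]]),                   # probabilities sum to one
    b_eq=np.array([1.0]),
    bounds=[(0, 1)] * 3,
)
print("fair randomized allocation:", res.x)       # roughly [0.55, 0.45, 0.0]
```

Instead of always picking the marginally higher-scoring candidate, the optimum randomizes between the two near-equal candidates while never selecting the clearly weaker one.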
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- It's Not Fairness, and It's Not Fair: The Failure of Distributional Equality and the Promise of Relational Equality in Complete-Information Hiring Games [2.8935588665357077]
Existing discrimination and injustice are often the result of unequal social relations, rather than an unequal distribution of resources.
We show how optimizing for existing computational and economic definitions of fairness and equality fails to prevent unequal social relations.
arXiv Detail & Related papers (2022-09-12T20:35:42Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Developing a Philosophical Framework for Fair Machine Learning: Lessons From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning which develops metrics and definitions of fairness cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such fairness-aware approaches will likely be deemed "algorithmic affirmative action".
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z)
- All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between the group and individual fairness.
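To give a flavor of distribution repair with a fairness trade-off: the paper operates on the full adjacency matrix with optimal transport, whereas the quantile alignment below is a simpler one-dimensional stand-in applied to node degrees. The lambda parameter and function name are assumptions; lambda = 1 equalizes the group-conditional distributions (group fairness), lambda = 0 leaves each node untouched (individual fairness).

```python
# Rough 1-D illustration of "repairing" a graph statistic so its
# distribution no longer differs by group. A simplified stand-in for the
# paper's optimal-transport repair of the adjacency matrix.
import numpy as np

def partial_repair(values, group, lam):
    """Move each node's value toward the pooled distribution at its quantile."""
    repaired = values.astype(float)
    pooled = np.sort(values)                                  # target distribution
    for g in np.unique(group):
        idx = np.where(group == g)[0]
        ranks = values[idx].argsort().argsort()               # rank within group
        quantiles = (ranks + 0.5) / len(idx)
        target = np.quantile(pooled, quantiles)               # same-quantile pooled value
        repaired[idx] = (1 - lam) * values[idx] + lam * target
    return repaired

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=200)
degree = np.where(group == 1, rng.poisson(12, 200), rng.poisson(4, 200))

for lam in (0.0, 0.5, 1.0):
    rep = partial_repair(degree, group, lam)
    gap = abs(rep[group == 1].mean() - rep[group == 0].mean())
    print(f"lambda={lam}: mean-degree gap between groups = {gap:.2f}")
```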
arXiv Detail & Related papers (2020-10-30T15:33:13Z)