Legal perspective on possible fairness measures - A legal discussion
using the example of hiring decisions (preprint)
- URL: http://arxiv.org/abs/2108.06918v1
- Date: Mon, 16 Aug 2021 06:41:39 GMT
- Title: Legal perspective on possible fairness measures - A legal discussion
using the example of hiring decisions (preprint)
- Authors: Marc P Hauer, Johannes Kevekordes, Maryam Amir Haeri
- Abstract summary: We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the increasing use of AI in algorithmic decision making (e.g. based on
neural networks), the question arises how bias can be excluded or mitigated.
There are some promising approaches, but many of them are based on a "fair"
ground truth, while others are based on a subjective goal to be reached, which
leads to the usual problem of how to define and compute "fairness". The
different
functioning of algorithmic decision making in contrast to human decision making
leads to a shift from a process-oriented to a result-oriented discrimination
assessment. We argue that with such a shift, society needs to determine which
kind of fairness is the right one to choose for each scenario. To
understand the implications of such a determination we explain the different
kinds of fairness concepts that might be applicable for the specific
application of hiring decisions, analyze their pros and cons with regard to the
respective fairness interpretation and evaluate them from a legal perspective
(based on EU law).
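The shift from process-oriented to result-oriented discrimination assessment described in the abstract can be made concrete with a small sketch. The following illustrative snippet (not from the paper; all data, numbers, and function names are hypothetical) computes two common result-oriented fairness metrics, demographic parity and equal opportunity, on toy hiring decisions:

```python
# Toy hiring data for two applicant groups (1 = hired / qualified, 0 = not).
# Purely illustrative values; nothing here comes from the paper.

def selection_rate(decisions):
    """Fraction of applicants who were hired."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(dec_a, dec_b):
    """Absolute difference in hire rates between the two groups."""
    return abs(selection_rate(dec_a) - selection_rate(dec_b))

def equal_opportunity_gap(dec_a, qual_a, dec_b, qual_b):
    """Difference in hire rates among the *qualified* members of each group."""
    tpr_a = selection_rate([d for d, q in zip(dec_a, qual_a) if q])
    tpr_b = selection_rate([d for d, q in zip(dec_b, qual_b) if q])
    return abs(tpr_a - tpr_b)

# Group A and group B, four applicants each.
dec_a, qual_a = [1, 1, 0, 0], [1, 1, 1, 0]
dec_b, qual_b = [1, 0, 0, 0], [1, 1, 0, 0]

print(demographic_parity_gap(dec_a, dec_b))  # 0.25
print(equal_opportunity_gap(dec_a, qual_a, dec_b, qual_b))
```

The two metrics can disagree on the same data, which is exactly why the abstract argues that society must decide which fairness notion is the right one for a given scenario.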
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We provide the first analysis of the tension between fairness and accuracy through a causal lens.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Fairness in AI: challenges in bridging the gap between algorithms and law [2.651076518493962]
We identify best practices and strategies for the specification and adoption of fairness definitions and algorithms in real-world systems and use cases.
We introduce a set of core criteria that need to be taken into account when selecting a specific fairness definition for real-world use case applications.
arXiv Detail & Related papers (2024-04-30T08:59:00Z)
- AI Fairness in Practice [0.46671368497079174]
There is a broad spectrum of views across society on what the concept of fairness means and how it should be put to practice.
This workbook explores how a context-based approach to understanding AI Fairness can help project teams better identify, mitigate, and manage the many ways that unfair bias and discrimination can crop up across the AI project workflow.
arXiv Detail & Related papers (2024-02-19T23:02:56Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work aims at assessing up to what point we can assure legal fairness through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
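To illustrate the two metrics named in this paper's title, the following sketch (hypothetical data and naming, not the paper's code) computes a conditional demographic disparity by comparing selection rates between two groups within each stratum of a legitimate conditioning variable, such as the department applied to:

```python
from collections import defaultdict

def conditional_demographic_disparity(records):
    """Per-stratum gap in selection rates between groups 'A' and 'B'.

    records: list of (group, stratum, decision) tuples.
    Field names and group labels are illustrative assumptions,
    not the paper's notation.
    """
    by_stratum = defaultdict(lambda: {"A": [], "B": []})
    for group, stratum, decision in records:
        by_stratum[stratum][group].append(decision)

    gaps = {}
    for stratum, groups in by_stratum.items():
        if groups["A"] and groups["B"]:
            rate = lambda xs: sum(xs) / len(xs)
            gaps[stratum] = rate(groups["A"]) - rate(groups["B"])
    return gaps

records = [
    ("A", "engineering", 1), ("A", "engineering", 0),
    ("B", "engineering", 1), ("B", "engineering", 0),
    ("A", "sales", 1), ("A", "sales", 1),
    ("B", "sales", 1), ("B", "sales", 0),
]
print(conditional_demographic_disparity(records))
# {'engineering': 0.0, 'sales': 0.5}
```

Conditioning on a stratum can reveal a disparity (here in sales) that an unconditional demographic-parity check would dilute across the whole applicant pool.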
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
arXiv Detail & Related papers (2022-06-06T20:31:55Z)
- The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed that consider different notions of what a "fair decision" is in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
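As a rough sketch of the idea behind a derivable conditional-fairness regularizer (a generic surrogate written for illustration, not the paper's exact DCFR formulation), one can penalize the gap in mean predicted scores between groups within each value of the fairness variable and add that penalty to the predictive loss:

```python
def fairness_penalty(scores, groups, condition):
    """Mean squared gap in average predicted score between groups 0 and 1,
    computed separately within each value of the conditioning variable.

    A generic differentiable surrogate for conditional fairness;
    all names and the exact form are illustrative assumptions.
    """
    strata = set(condition)
    penalty = 0.0
    for c in strata:
        a = [s for s, g, cv in zip(scores, groups, condition) if cv == c and g == 0]
        b = [s for s, g, cv in zip(scores, groups, condition) if cv == c and g == 1]
        if a and b:
            penalty += (sum(a) / len(a) - sum(b) / len(b)) ** 2
    return penalty / max(len(strata), 1)

def total_loss(pred_loss, scores, groups, condition, lam=1.0):
    # lam controls the trade-off between predictive loss and conditional fairness.
    return pred_loss + lam * fairness_penalty(scores, groups, condition)
```

Tuning `lam` traces out the precision-fairness trade-off that the summary above describes.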
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
- Principal Fairness for Human and Algorithmic Decision-Making [1.2691047660244335]
We introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making.
Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision.
arXiv Detail & Related papers (2020-05-21T00:24:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.