Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree
- URL: http://arxiv.org/abs/2305.13938v2
- Date: Wed, 24 May 2023 20:11:38 GMT
- Title: Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree
- Authors: Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, Mykola Pechenizkiy
- Abstract summary: We show to what extent EU non-discrimination law coincides with notions of algorithmic fairness proposed in computer science literature, and where the two differ.
We set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the Court of Justice of the EU.
We conclude with implications for AI practitioners and regulators.
- Score: 5.153559154345212
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concerns regarding unfairness and discrimination in the context of artificial
intelligence (AI) systems have recently received increased attention from both
legal and computer science scholars. Yet, the degree of overlap between notions
of algorithmic bias and fairness on the one hand, and legal notions of
discrimination and equality on the other, is often unclear, leading to
misunderstandings between computer science and law. What types of bias and
unfairness does the law address when it prohibits discrimination? What role can
fairness metrics play in establishing legal compliance? In this paper, we aim
to illustrate to what extent European Union (EU) non-discrimination law
coincides with notions of algorithmic fairness proposed in computer science
literature and where they differ. The contributions of this paper are as
follows. First, we analyse seminal examples of algorithmic unfairness through
the lens of EU non-discrimination law, drawing parallels with EU case law.
Second, we set out the normative underpinnings of fairness metrics and
technical interventions and compare these to the legal reasoning of the Court
of Justice of the EU. Specifically, we show how normative assumptions often
remain implicit in both disciplinary approaches and explain the ensuing
limitations of current AI practice and non-discrimination law. We conclude with
implications for AI practitioners and regulators.
Related papers
- Non-discrimination law in Europe: a primer for non-lawyers [44.715854387549605]
We aim to describe the law in such a way that non-lawyers and non-European lawyers can easily grasp its contents and challenges.
We introduce the EU-wide non-discrimination rules which are included in a number of EU directives.
The last section broadens the horizon to include bias-relevant law and cases from the EU AI Act, and related statutes.
arXiv Detail & Related papers (2024-04-12T14:59:58Z)
- Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussions in the past years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging these two approaches.
arXiv Detail & Related papers (2024-03-29T09:54:09Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks that enhance representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work assesses to what extent legal fairness can be ensured through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
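For readers unfamiliar with the metric in the entry's title, demographic parity can be computed in a few lines. This is a generic illustrative sketch with function names of our own, not code from the paper:

```python
# Illustrative sketch of the demographic parity metric named in the title
# above. Function and variable names are our own, not the paper's.

def demographic_parity_difference(y_pred, group):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means demographic parity holds exactly.

    y_pred: list of 0/1 predictions
    group:  list of group labels (same length as y_pred)
    """
    rates = {}
    for g in set(group):
        preds = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Example: group "a" receives the positive outcome 75% of the time,
# group "b" only 25% of the time.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # 0.5
```

Whether such a gap amounts to unlawful discrimination is, as the papers above argue, a separate legal question depending on the case and its justification.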
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction [46.71918729837462]
Given the fact description text of a legal case, legal judgment prediction aims to predict the case's charge, law article and penalty term.
Previous studies, which rely on a standard cross-entropy classification loss, fail to distinguish between different classification errors.
We propose a MoCo-based supervised contrastive learning approach to learn distinguishable representations.
We further enhance the representation of the fact description with extracted crime amounts which are encoded by a pre-trained numeracy model.
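The supervised contrastive objective mentioned above pulls same-charge cases together in embedding space and pushes differently-charged (and thus potentially confusing) cases apart. Below is a minimal sketch of the generic SupCon loss, with our own names; it is not the paper's exact MoCo-based implementation:

```python
import math

# Minimal sketch of a supervised contrastive (SupCon) loss of the kind
# described above. Names and structure are our own simplification, not the
# paper's MoCo-based implementation.

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Average over anchors of the mean -log probability assigned to the
    anchor's same-label positives, under a softmax over all other samples."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def normalize(v):
        norm = math.sqrt(dot(v, v))
        return [x / norm for x in v]

    z = [normalize(e) for e in embeddings]
    n = len(z)
    loss, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without a same-label partner contribute nothing
        denom = sum(math.exp(dot(z[i], z[j]) / temperature)
                    for j in range(n) if j != i)
        # mean -log p(positive | anchor) over this anchor's positives
        loss += sum(-math.log(math.exp(dot(z[i], z[j]) / temperature) / denom)
                    for j in positives) / len(positives)
        anchors += 1
    return loss / anchors if anchors else 0.0
```

The loss is small when same-label embeddings are clustered and large when confusable classes overlap, which is exactly the separation the paper seeks between easily confused charges.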
arXiv Detail & Related papers (2022-11-15T15:53:56Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as mechanisms to deal with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, act, and the effect they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z)
- Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such approaches will likely be deemed "algorithmic affirmative action".
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z)
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
arXiv Detail & Related papers (2020-05-12T16:30:12Z)
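The CDD baseline proposed in the last entry can be made concrete with a small sketch. This is our own simplified reading of the measure, a size-weighted average of within-stratum demographic disparity, with hypothetical names; it is not Wachter et al.'s reference implementation:

```python
# Illustrative sketch of conditional demographic disparity (CDD): demographic
# disparity computed within strata of a legitimate conditioning attribute and
# averaged, weighted by stratum size. One common formulation of DD is used;
# names and the exact weighting are our own simplification.

def demographic_disparity(y_pred, group, target_group):
    """Share of positive predictions going to target_group minus its share
    of negative predictions; 0.0 means the group is equally represented
    among accepted and rejected cases."""
    pos = [g for p, g in zip(y_pred, group) if p == 1]
    neg = [g for p, g in zip(y_pred, group) if p == 0]
    p_pos = pos.count(target_group) / len(pos) if pos else 0.0
    p_neg = neg.count(target_group) / len(neg) if neg else 0.0
    return p_pos - p_neg

def conditional_demographic_disparity(y_pred, group, strata, target_group):
    """Size-weighted average of demographic disparity within each stratum."""
    n = len(y_pred)
    total = 0.0
    for s in set(strata):
        idx = [i for i, ss in enumerate(strata) if ss == s]
        dd = demographic_disparity([y_pred[i] for i in idx],
                                   [group[i] for i in idx], target_group)
        total += len(idx) / n * dd
    return total
```

In the papers' framing, such a statistic is a baseline for courts and regulators to interrogate, not a pass/fail compliance test.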
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.