Why Fairness Cannot Be Automated: Bridging the Gap Between EU
Non-Discrimination Law and AI
- URL: http://arxiv.org/abs/2005.05906v1
- Date: Tue, 12 May 2020 16:30:12 GMT
- Title: Why Fairness Cannot Be Automated: Bridging the Gap Between EU
Non-Discrimination Law and AI
- Authors: Sandra Wachter, Brent Mittelstadt, Chris Russell
- Abstract summary: The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminates.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
- Score: 10.281644134255576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article identifies a critical incompatibility between European notions
of discrimination and existing statistical measures of fairness. First, we
review the evidential requirements to bring a claim under EU non-discrimination
law. Due to the disparate nature of algorithmic and human discrimination, the
EU's current requirements are too contextual, reliant on intuition, and open to
judicial interpretation to be automated. Second, we show how the legal
protection offered by non-discrimination law is challenged when AI, not humans,
discriminates. Humans discriminate due to negative attitudes (e.g. stereotypes,
prejudice) and unintentional biases (e.g. organisational practices or
internalised stereotypes) which can act as a signal to victims that
discrimination has occurred. Equivalent signalling mechanisms do not exist in algorithmic systems, which makes automated discrimination harder for victims to detect. Finally, we examine how existing work on fairness
in machine learning lines up with procedures for assessing cases under EU
non-discrimination law. We propose "conditional demographic disparity" (CDD) as
a standard baseline statistical measurement that aligns with the European Court
of Justice's "gold standard." Establishing a standard set of statistical
evidence for automated discrimination cases can help ensure consistent
procedures for assessment, but not judicial interpretation, of cases involving
AI and automated systems. Through this proposal for procedural regularity in
the identification and assessment of automated discrimination, we clarify how
to build considerations of fairness into automated systems as far as possible
while still respecting and enabling the contextual approach to judicial
interpretation practiced under EU non-discrimination law.
N.B. Abridged abstract
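
As a rough illustration only (not the authors' reference implementation), the sketch below computes a CDD-style measure from tabular data: within each stratum it compares every group's share of negative outcomes with its share of positive outcomes, then averages the per-stratum gaps weighted by stratum size. The column names `group`, `stratum` and `outcome`, the binary outcome coding, and the exact aggregation are assumptions made for the example.

```python
import pandas as pd

def conditional_demographic_disparity(df, group_col="group",
                                      stratum_col="stratum",
                                      outcome_col="outcome"):
    """Illustrative sketch of conditional demographic disparity (CDD).

    Assumes a binary outcome column (1 = advantaged, 0 = disadvantaged).
    For each stratum (e.g. job type or loan band), compute every group's
    share among negative outcomes minus its share among positive outcomes,
    then average the per-stratum disparities weighted by stratum size.
    Column names and aggregation are placeholders, not the paper's notation.
    """
    groups = df[group_col].unique()
    per_stratum, weights = [], []
    for _, sub in df.groupby(stratum_col):
        neg = sub.loc[sub[outcome_col] == 0, group_col]
        pos = sub.loc[sub[outcome_col] == 1, group_col]
        neg_share = neg.value_counts(normalize=True).reindex(groups, fill_value=0)
        pos_share = pos.value_counts(normalize=True).reindex(groups, fill_value=0)
        per_stratum.append(neg_share - pos_share)
        weights.append(len(sub))
    total = sum(weights)
    # Weighted average of per-stratum disparities, one value per group.
    return sum((w / total) * d for w, d in zip(weights, per_stratum))

# Hypothetical toy data:
df = pd.DataFrame({
    "group":   ["a", "a", "b", "b", "a", "b", "b"],
    "stratum": ["x", "x", "x", "x", "y", "y", "y"],
    "outcome": [1, 0, 0, 0, 1, 1, 0],
})
print(conditional_demographic_disparity(df))
```

Because the disparity is computed within strata, such a measure stays compatible with the contextual reasoning described above: the choice of strata and the interpretation of the resulting numbers remain judicial questions.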
Related papers
- Formalising Anti-Discrimination Law in Automated Decision Systems [1.560976479364936]
We study the legal challenges in automated decision-making by analysing conventional algorithmic fairness approaches.
By translating principles of anti-discrimination law into a decision-theoretic framework, we formalise discrimination.
We propose a new, legally informed approach to developing systems for automated decision-making.
arXiv Detail & Related papers (2024-06-29T10:59:21Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms [4.1221687771754]
The EU legal concept of direct discrimination may apply to various algorithmic decision-making contexts.
Unlike indirect discrimination, there is generally no 'objective justification' stage in the direct discrimination framework.
We focus on the most likely candidate for direct discrimination in the algorithmic context.
arXiv Detail & Related papers (2024-04-22T10:06:17Z)
- Non-discrimination law in Europe: a primer for non-lawyers [44.715854387549605]
We aim to describe the law in such a way that non-lawyers and non-European lawyers can easily grasp its contents and challenges.
We introduce the EU-wide non-discrimination rules which are included in a number of EU directives.
The last section broadens the horizon to include bias-relevant law and cases from the EU AI Act and related statutes.
arXiv Detail & Related papers (2024-04-12T14:59:58Z)
- Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussions in the past years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging the legal and technical approaches to fairness in AI.
arXiv Detail & Related papers (2024-03-29T09:54:09Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work aims to assess the extent to which legal fairness can be ensured through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree [5.153559154345212]
We illustrate to what extent EU non-discrimination law coincides with notions of algorithmic fairness proposed in the computer science literature, and where they differ.
We set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the Court of Justice of the EU.
We conclude with implications for AI practitioners and regulators.
arXiv Detail & Related papers (2023-05-05T12:00:39Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness; a rough sketch of the underlying idea appears after this list.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
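
For the "Measuring Fairness of Text Classifiers via Prediction Sensitivity" entry above, the following is a minimal sketch of the general idea only, not the paper's exact ACCUMULATED PREDICTION SENSITIVITY formula; the finite-difference gradient, the `predict_proba` interface, and the `feature_weights` vector are assumptions made for the example.

```python
import numpy as np

def prediction_sensitivity(predict_proba, x, feature_weights=None, eps=1e-3):
    """Sketch of a prediction-sensitivity score for a single example.

    Approximates the gradient of the positive-class probability with finite
    differences and returns a weighted sum of its absolute values.
    `feature_weights` (assumed, e.g. each feature's correlation with a
    protected attribute) controls how much each feature contributes.
    """
    x = np.asarray(x, dtype=float)
    base = predict_proba(x[None, :])[0, 1]
    grads = np.empty_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        grads[i] = (predict_proba(x_pert[None, :])[0, 1] - base) / eps
    if feature_weights is None:
        feature_weights = np.ones_like(x)
    return float(np.dot(feature_weights, np.abs(grads)))

# Hypothetical usage with a scikit-learn-style classifier `clf`:
# score = prediction_sensitivity(clf.predict_proba, x_example)
```

Averaging such a score over a dataset and comparing it across protected groups is one way to relate per-example sensitivity to group-level notions such as statistical parity.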
This list is automatically generated from the titles and abstracts of the papers on this site.