Formalising Anti-Discrimination Law in Automated Decision Systems
- URL: http://arxiv.org/abs/2407.00400v2
- Date: Tue, 04 Feb 2025 21:17:19 GMT
- Title: Formalising Anti-Discrimination Law in Automated Decision Systems
- Authors: Holli Sargeant, Måns Magnusson
- Abstract summary: We introduce a novel decision-theoretic framework grounded in anti-discrimination law of the United Kingdom.
We propose the 'conditional estimation parity' metric, which accounts for estimation error and the underlying data-generating process.
Our approach bridges the divide between machine learning fairness metrics and anti-discrimination law, offering a legally grounded framework for developing non-discriminatory automated decision systems.
- Score: 1.560976479364936
- Abstract: Algorithmic discrimination is a critical concern as machine learning models are used in high-stakes decision-making in legally protected contexts. Although substantial research on algorithmic bias and discrimination has led to the development of fairness metrics, several critical legal issues remain unaddressed in practice. To address these gaps, we introduce a novel decision-theoretic framework grounded in anti-discrimination law of the United Kingdom, which has global influence and aligns more closely with European and Commonwealth legal systems. We propose the 'conditional estimation parity' metric, which accounts for estimation error and the underlying data-generating process, aligning with legal standards. Through a real-world example based on an algorithmic credit discrimination case, we demonstrate the practical application of our formalism and provide insights for aligning fairness metrics with legal principles. Our approach bridges the divide between machine learning fairness metrics and anti-discrimination law, offering a legally grounded framework for developing non-discriminatory automated decision systems.
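The abstract does not spell out the formal definition of 'conditional estimation parity'. As a rough, hypothetical sketch of how such a criterion might be checked, the snippet below compares mean estimation error across protected groups within strata of a legally relevant covariate; the function name, the choice of strata, and the synthetic credit data are illustrative assumptions, not the paper's formalism.

```python
# Hypothetical check of a "conditional estimation parity"-style criterion:
# compare mean estimation error (estimate minus outcome) across protected
# groups, conditional on a legally relevant stratum.
import numpy as np
import pandas as pd

def conditional_estimation_gap(df, estimate, outcome, group, stratum):
    """Worst-case between-group gap in mean estimation error within any stratum."""
    df = df.assign(error=df[estimate] - df[outcome])
    per_cell = df.groupby([stratum, group])["error"].mean().unstack(group)
    return float((per_cell.max(axis=1) - per_cell.min(axis=1)).max())

# Synthetic credit-scoring data, for illustration only.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], n),          # protected attribute
    "income_band": rng.choice(["low", "mid", "high"], n),
})
df["default"] = rng.binomial(1, 0.2, n)          # observed outcome
df["score"] = np.clip(0.2 + rng.normal(0, 0.05, n), 0, 1)  # model estimate

gap = conditional_estimation_gap(df, "score", "default", "group", "income_band")
print(f"worst-case conditional estimation gap: {gap:.4f}")
```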
Related papers
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights considerations already underpin decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested in concrete case studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms [4.1221687771754]
The EU legal concept of direct discrimination may apply to various algorithmic decision-making contexts.
Unlike indirect discrimination, there is generally no 'objective justification' stage in the direct discrimination framework.
We focus on the most likely candidate for direct discrimination in the algorithmic context.
arXiv Detail & Related papers (2024-04-22T10:06:17Z)
- Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussion in recent years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging these two approaches.
arXiv Detail & Related papers (2024-03-29T09:54:09Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the encoder's representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
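As a hedged sketch of the shallow-decoder bottleneck idea (not the authors' implementation; layer counts, dimensions, and the omitted masking are simplifications), a deep encoder can be paired with a deliberately weak one-layer decoder that must reconstruct the tokens from a single pooled vector, pressuring the encoder to pack more information into that representation:

```python
# Minimal PyTorch sketch: deep encoder, single-vector bottleneck, shallow
# decoder that reconstructs the input tokens from the bottleneck alone.
# Positional embeddings and attention masking are omitted for brevity.
import torch
import torch.nn as nn

class BottleneckPretrainer(nn.Module):
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        enc = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=6)    # deep
        dec = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers=1)    # shallow
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))
        bottleneck = h[:, :1, :]               # single [CLS]-style vector
        out = self.decoder(self.embed(input_ids), memory=bottleneck)
        return self.lm_head(out)               # token logits for reconstruction

model = BottleneckPretrainer()
ids = torch.randint(0, 30522, (2, 32))
logits = model(ids)
loss = nn.functional.cross_entropy(logits.view(-1, 30522), ids.view(-1))
```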
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work assesses the extent to which legal fairness can be assured through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
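For concreteness, one common formalisation of the two metrics in the title can be computed as follows; the synthetic hiring data and the size-weighted aggregation of per-stratum gaps are illustrative assumptions rather than the paper's exact experimental setup.

```python
# Demographic parity compares positive-decision rates across groups overall;
# conditional demographic disparity (CDD) compares them within strata of a
# legitimate conditioning attribute and aggregates the gaps by stratum size.
import numpy as np
import pandas as pd

def demographic_disparity(df, decision, group):
    rates = df.groupby(group)[decision].mean()
    return float(rates.max() - rates.min())

def conditional_demographic_disparity(df, decision, group, stratum):
    weights = df[stratum].value_counts(normalize=True)
    gaps = df.groupby(stratum).apply(lambda s: demographic_disparity(s, decision, group))
    return float((gaps * weights).sum())

# Synthetic hiring data: the decision depends only on qualification.
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], n),
    "qualification": rng.choice(["junior", "senior"], n),
})
df["hired"] = rng.binomial(1, np.where(df["qualification"] == "senior", 0.6, 0.2))

print("demographic disparity:", demographic_disparity(df, "hired", "group"))
print("CDD:", conditional_demographic_disparity(df, "hired", "group", "qualification"))
```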
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree [5.153559154345212]
We show that EU non-discrimination law coincides with notions of algorithmic fairness proposed in computer science literature.
We set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the Court of Justice of the EU.
We conclude with implications for AI practitioners and regulators.
arXiv Detail & Related papers (2023-05-05T12:00:39Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
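A toy simulation (not the paper's formal decomposition) can make this linkage concrete: in a structural causal model where the protected attribute affects the outcome both directly and through a mediator, re-simulating with one pathway switched off attributes part of the observed disparity to each mechanism. All coefficients below are arbitrary.

```python
# Simulate a small SCM: x (protected) -> w (mediator) -> y, plus x -> y.
# Comparing disparities across interventions on the pathways splits the
# total disparity into direct and mediated contributions.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def simulate(direct=True, mediated=True):
    x = rng.binomial(1, 0.5, n)                            # protected attribute
    w = rng.normal((0.8 if mediated else 0.0) * x, 1.0)    # mediator, e.g. income
    y_logit = (0.5 if direct else 0.0) * x + w - 0.5
    y = rng.binomial(1, 1 / (1 + np.exp(-y_logit)))        # decision outcome
    return x, y

def disparity(x, y):
    return y[x == 1].mean() - y[x == 0].mean()

x, y = simulate()
total = disparity(x, y)
x, y = simulate(direct=False)        # switch off the direct pathway
via_mediator = disparity(x, y)
print(f"total disparity:       {total:.3f}")
print(f"via mediator only:     {via_mediator:.3f}")
print(f"direct contribution:   {total - via_mediator:.3f}")
```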
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure judicial inconsistency as the disagreement between judgments produced by LJP models trained on different groups.
We employ LInCo to explore inconsistency in real cases and observe that both regional and gender inconsistency exist in the legal system, though gender inconsistency is much smaller than regional inconsistency.
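The exact coefficient is defined in the paper; as a hedged sketch of the underlying idea, one can train separate legal judgment prediction models on cases from different groups and score inconsistency as their disagreement rate on shared held-out cases. The features, models, and data below are placeholders, not the paper's setup.

```python
# Train one "judge" per group on synthetic case features, then measure
# inconsistency as the rate at which the two judges disagree on the same cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, d = 2_000, 20
X_a, X_b = rng.normal(size=(n, d)), rng.normal(size=(n, d))
w = rng.normal(size=d)
y_a = (X_a @ w + rng.normal(0, 1, n) > 0).astype(int)
y_b = (X_b @ (w + 0.5 * rng.normal(size=d)) + rng.normal(0, 1, n) > 0).astype(int)

judge_a = LogisticRegression(max_iter=1000).fit(X_a, y_a)   # "group A judge"
judge_b = LogisticRegression(max_iter=1000).fit(X_b, y_b)   # "group B judge"

X_test = rng.normal(size=(1_000, d))                        # shared test cases
disagreement = np.mean(judge_a.predict(X_test) != judge_b.predict(X_test))
print(f"inconsistency (disagreement rate): {disagreement:.3f}")
```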
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
arXiv Detail & Related papers (2020-05-12T16:30:12Z)