Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness
- URL: http://arxiv.org/abs/2403.20089v2
- Date: Wed, 26 Jun 2024 07:35:30 GMT
- Title: Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness
- Authors: Luca Deck, Jan-Laurin Müller, Conradin Braun, Domenique Zipperling, Niklas Kühl
- Abstract summary: The topic of fairness in AI has sparked meaningful discussions in the past years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging these two approaches.
- Score: 1.5029560229270191
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The topic of fairness in AI, as debated in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) communities, has sparked meaningful discussions in the past years. However, from a legal perspective, particularly from the perspective of European Union law, many open questions remain. Whereas algorithmic fairness aims to mitigate structural inequalities at design-level, European non-discrimination law is tailored to individual cases of discrimination after an AI model has been deployed. The AI Act might present a tremendous step towards bridging these two approaches by shifting non-discrimination responsibilities into the design stage of AI models. Based on an integrative reading of the AI Act, we comment on legal as well as technical enforcement problems and propose practical implications on bias detection and bias correction in order to specify and comply with specific technical requirements.
Related papers
- It's complicated. The relationship of algorithmic fairness and non-discrimination regulations in the EU AI Act [2.9914612342004503]
The EU has recently passed the AI Act, which mandates specific rules for AI models.
This paper introduces both legal non-discrimination regulations and machine learning based algorithmic fairness concepts.
arXiv Detail & Related papers (2025-01-22T15:38:09Z)
- On Algorithmic Fairness and the EU Regulations [0.2538209532048867]
The paper focuses on algorithmic fairness, particularly non-discrimination, in the European Union (EU).
The paper demonstrates that correcting discriminatory biases in AI systems can be legally done under the EU regulations.
The paper contributes legal insights to algorithmic fairness research, strengthening the growing research domain of compliance in AI engineering.
arXiv Detail & Related papers (2024-11-13T06:23:54Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA).
This article outlines the main building blocks of a model template for the FRIA.
It can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Formalising Anti-Discrimination Law in Automated Decision Systems [1.560976479364936]
We introduce a novel decision-theoretic framework grounded in anti-discrimination law of the United Kingdom.
We propose the 'conditional estimation parity' metric, which accounts for estimation error and the underlying data-generating process.
Our approach bridges the divide between machine learning fairness metrics and anti-discrimination law, offering a legally grounded framework for developing non-discriminatory automated decision systems.
arXiv Detail & Related papers (2024-06-29T10:59:21Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical need to address biases as part of developing a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges at the intersection of Generative AI and law.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree [5.153559154345212]
We examine to what extent EU non-discrimination law coincides with notions of algorithmic fairness proposed in the computer science literature, and where the two diverge.
We set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the Court of Justice of the EU.
We conclude with implications for AI practitioners and regulators.
arXiv Detail & Related papers (2023-05-05T12:00:39Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Fairness and Explainability in Automatic Decision-Making Systems. A challenge for computer science and law [3.656085108168043]
The paper offers an interdisciplinary contribution to analyzing fairness issues in automatic algorithmic decisions.
Section 1 shows that technical choices in supervised learning have social implications that need to be considered.
Section 2 proposes a contextual approach to the issue of unintended group discrimination.
arXiv Detail & Related papers (2022-05-14T01:08:47Z)
- Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as mechanisms to deal with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, act, and the effect they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.