The Illusion of Rights based AI Regulation
- URL: http://arxiv.org/abs/2503.05784v1
- Date: Thu, 27 Feb 2025 03:05:32 GMT
- Title: The Illusion of Rights based AI Regulation
- Authors: Yiyang Mei, Matthew Sag
- Abstract summary: We show how EU AI regulation is the logical outgrowth of a particular cultural, political, and historical context. We reject claims that the EU's regulatory framework and the substance of its rules should be adopted as universal imperatives.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Whether and how to regulate AI is one of the defining questions of our times - a question that is being debated locally, nationally, and internationally. We argue that much of this debate is proceeding on a false premise. Specifically, our article challenges the prevailing academic consensus that the European Union's AI regulatory framework is fundamentally rights-driven and the correlative presumption that other rights-regarding nations should therefore follow Europe's lead in AI regulation. Rather than taking rights language in EU rules and regulations at face value, we show how EU AI regulation is the logical outgrowth of a particular cultural, political, and historical context. We show that although instruments like the General Data Protection Regulation (GDPR) and the AI Act invoke the language of fundamental rights, these rights are instrumentalized - used as rhetorical cover for governance tools that address systemic risks and maintain institutional stability. As such, we reject claims that the EU's regulatory framework and the substance of its rules should be adopted as universal imperatives and transplanted to other liberal democracies. To add weight to our argument from historical context, we conduct a comparative analysis of AI regulation in five contested domains: data privacy, cybersecurity, healthcare, labor, and misinformation. This EU-US comparison shows that the EU's regulatory architecture is not meaningfully rights-based. Our article's key intervention in AI policy debates is not to suggest that the current American regulatory model is necessarily preferable but that the presumed legitimacy of the EU's AI regulatory approach must be abandoned.
Related papers
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Pitfalls of Evidence-Based AI Policy [13.370321579091387]
We argue that if the goal is evidence-based AI policy, the first regulatory objective must be to actively facilitate the process of identifying, studying, and deliberating about AI risks.
We discuss a set of 15 regulatory goals to facilitate this and show that Brazil, Canada, China, the EU, South Korea, the UK, and the USA all have substantial opportunities to adopt further evidence-seeking policies.
arXiv Detail & Related papers (2025-02-13T18:59:30Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA).
It outlines the main building blocks of a model template for the FRIA.
This template can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Considering Fundamental Rights in the European Standardisation of Artificial Intelligence: Nonsense or Strategic Alliance? [0.0]
This chapter aims to clarify the relationship between AI standards and fundamental rights.
The main issue tackled is whether the adoption of AI harmonised standards, based on the future AI Act, should take into account fundamental rights.
arXiv Detail & Related papers (2024-01-23T10:17:42Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Quantitative study about the estimated impact of the AI Act [0.0]
We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021.
Over several iterations, we compiled the list of AI products and projects in and from Germany that the Lernende Systeme platform lists.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk.
arXiv Detail & Related papers (2023-03-29T06:23:16Z)
- The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLMs) are often perceived as impartial and objective.
We highlight the cases of three contested territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.