The Illusory Normativity of Rights-Based AI Regulation
- URL: http://arxiv.org/abs/2503.05784v2
- Date: Tue, 12 Aug 2025 23:37:16 GMT
- Title: The Illusory Normativity of Rights-Based AI Regulation
- Authors: Yiyang Mei, Matthew Sag
- Abstract summary: We argue that the rights-based narrative surrounding EU AI regulation mischaracterizes the logic of its institutional design. Our aim is not to endorse the American model but to reject the presumption that the EU approach reflects a normative ideal.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Whether and how to regulate AI is now a central question of governance. Across academic, policy, and international legal circles, the European Union is widely treated as the normative leader in this space. Its regulatory framework, anchored in the General Data Protection Regulation, the Digital Services and Markets Acts, and the AI Act, is often portrayed as a principled model grounded in fundamental rights. This Article challenges that assumption. We argue that the rights-based narrative surrounding EU AI regulation mischaracterizes the logic of its institutional design. While rights language pervades EU legal instruments, its function is managerial, not foundational. These rights operate as tools of administrative ordering, used to mitigate technological disruption, manage geopolitical risk, and preserve systemic balance, rather than as expressions of moral autonomy or democratic consent. Drawing on comparative institutional analysis, we situate EU AI governance within a longer tradition of legal ordering shaped by the need to coordinate power across fragmented jurisdictions. We contrast this approach with the American model, which reflects a different regulatory logic rooted in decentralized authority, sectoral pluralism, and a constitutional preference for innovation and individual autonomy. Through case studies in five key domains -- data privacy, cybersecurity, healthcare, labor, and disinformation -- we show that EU regulation is not meaningfully rights-driven, as is often claimed. It is instead structured around the containment of institutional risk. Our aim is not to endorse the American model but to reject the presumption that the EU approach reflects a normative ideal that other nations should uncritically adopt. The EU model is best understood as a historically contingent response to its own political conditions, not a template for others to blindly follow.
Related papers
- Pluralism in AI Governance: Toward Sociotechnical Alignment and Normative Coherence [0.16921396880325779]
The study synthesises frameworks including Full-Stack Alignment, Thick Models of Value, Value Sensitive Design, and Public Constitutional AI. It introduces a layered framework linking values, mechanisms, and strategies, and maps tensions such as fairness versus efficiency, transparency versus security, and privacy versus equity. The study contributes a holistic, value-sensitive model of AI governance, reframing regulation as a proactive mechanism for embedding public values into sociotechnical systems.
arXiv Detail & Related papers (2026-02-04T14:28:56Z)
- From Abstract Threats to Institutional Realities: A Comparative Semantic Network Analysis of AI Securitisation in the US, EU, and China [0.0]
Major jurisdictions converge rhetorically around concepts such as safety, risk, and accountability, but their regulatory frameworks remain fundamentally divergent and mutually unintelligible. This paper argues that this fragmentation cannot be explained solely by geopolitical rivalry, institutional complexity, or instrument selection. Using semantic network analysis, we trace how concepts like safety are embedded within divergent semantic architectures.
arXiv Detail & Related papers (2026-01-07T17:12:03Z)
- The Artificial Intelligence Value Chain: A Critical Appraisal [Spanish Version] [0.0]
The artificial intelligence value chain is one of the main concepts underpinning the European legislation on the subject. It is an economic concept that has become a legal one. The article proposes a framework for the analysis of the ethical and legal AI value chain to preserve democratic values and foster the digital implementation of the rule of law.
arXiv Detail & Related papers (2025-12-21T15:53:44Z)
- Comparing Apples to Oranges: A Taxonomy for Navigating the Global Landscape of AI Regulation [0.0]
We present a taxonomy to map the global landscape of AI regulation. We apply this framework to five early movers: the European Union's AI Act, the United States' Executive Order 14110, Canada's AI and Data Act, China's Interim Measures for Generative AI Services, and Brazil's AI Bill 2338/2023.
arXiv Detail & Related papers (2025-05-19T19:23:41Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Pitfalls of Evidence-Based AI Policy [13.370321579091387]
We argue that if the goal is evidence-based AI policy, the first regulatory objective must be to actively facilitate the process of identifying, studying, and deliberating about AI risks. We discuss a set of 15 regulatory goals to facilitate this and show that Brazil, Canada, China, the EU, South Korea, the UK, and the USA all have substantial opportunities to adopt further evidence-seeking policies.
arXiv Detail & Related papers (2025-02-13T18:59:30Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA) and outlines the main building blocks of a model template for the FRIA. The template can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act). It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The Artificial Intelligence Act: critical overview [0.0]
This article provides a critical overview of the recently approved Artificial Intelligence Act.
It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689.
The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
arXiv Detail & Related papers (2024-08-30T21:38:02Z)
- Considering Fundamental Rights in the European Standardisation of Artificial Intelligence: Nonsense or Strategic Alliance? [0.0]
This chapter aims to clarify the relationship between AI standards and fundamental rights.
The main issue tackled is whether the adoption of AI harmonised standards, based on the future AI Act, should take into account fundamental rights.
arXiv Detail & Related papers (2024-01-23T10:17:42Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI appear to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Quantitative study about the estimated impact of the AI Act [0.0]
We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021. Through several iterations, we compiled the list of AI products and projects in and from Germany that the Lernende Systeme platform lists.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act, the rest would be classified as low-risk.
arXiv Detail & Related papers (2023-03-29T06:23:16Z)
- The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLM) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.