From Abstract Threats to Institutional Realities: A Comparative Semantic Network Analysis of AI Securitisation in the US, EU, and China
- URL: http://arxiv.org/abs/2601.04107v1
- Date: Wed, 07 Jan 2026 17:12:03 GMT
- Title: From Abstract Threats to Institutional Realities: A Comparative Semantic Network Analysis of AI Securitisation in the US, EU, and China
- Authors: Ruiyi Guo, Bodong Zhang
- Abstract summary: Major jurisdictions converge rhetorically around concepts such as safety, risk, and accountability, but their regulatory frameworks remain fundamentally divergent and mutually unintelligible. This paper argues that this fragmentation cannot be explained solely by geopolitical rivalry, institutional complexity, or instrument selection. Using semantic network analysis, we trace how concepts like safety are embedded within divergent semantic architectures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence governance exhibits a striking paradox: while major jurisdictions converge rhetorically around concepts such as safety, risk, and accountability, their regulatory frameworks remain fundamentally divergent and mutually unintelligible. This paper argues that this fragmentation cannot be explained solely by geopolitical rivalry, institutional complexity, or instrument selection. Instead, it stems from how AI is constituted as an object of governance through distinct institutional logics. Integrating securitisation theory with the concept of the dispositif, we demonstrate that jurisdictions govern ontologically different objects under the same vocabulary. Using semantic network analysis of official policy texts from the European Union, the United States, and China (2023-2025), we trace how concepts like safety are embedded within divergent semantic architectures. Our findings reveal that the EU juridifies AI as a certifiable product through legal-bureaucratic logic; the US operationalises AI as an optimisable system through market-liberal logic; and China governs AI as socio-technical infrastructure through holistic state logic. We introduce the concept of structural incommensurability to describe this condition of ontological divergence masked by terminological convergence. This reframing challenges ethics-by-principles approaches to global AI governance, suggesting that coordination failures arise not from disagreement over values but from the absence of a shared reference object.
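As a rough illustration of the kind of semantic network analysis the abstract describes, the sketch below builds a sentence-level co-occurrence network from tokenised policy text and inspects the neighbourhood of "safety". The corpus, vocabulary, and weighting scheme are hypothetical stand-ins; the authors' actual pipeline (preprocessing, edge weighting, and network metrics) is not specified in the abstract.

```python
# Minimal sketch of sentence-level co-occurrence network construction,
# a common way to operationalise semantic network analysis. The corpus,
# vocabulary, and weighting below are illustrative assumptions only.
from collections import Counter
from itertools import combinations

import networkx as nx

def build_semantic_network(sentences, vocabulary):
    """Connect two vocabulary terms whenever they co-occur in a sentence."""
    weights = Counter()
    for tokens in sentences:
        terms = sorted({t for t in tokens if t in vocabulary})
        for a, b in combinations(terms, 2):
            weights[(a, b)] += 1
    graph = nx.Graph()
    for (a, b), w in weights.items():
        graph.add_edge(a, b, weight=w)
    return graph

# Hypothetical toy sentences standing in for tokenised policy documents.
corpus = [
    ["safety", "risk", "conformity", "certification"],
    ["safety", "benchmark", "performance", "optimisation"],
    ["safety", "stability", "infrastructure", "governance"],
]
vocab = {"safety", "risk", "certification", "conformity", "benchmark",
         "performance", "optimisation", "stability", "infrastructure",
         "governance"}

g = build_semantic_network(corpus, vocab)
# The weighted neighbourhood of "safety" indicates which concepts anchor it.
neighbours = sorted(g["safety"].items(), key=lambda kv: -kv[1]["weight"])
print([(term, attrs["weight"]) for term, attrs in neighbours])
```

Comparing such neighbourhoods across the EU, US, and Chinese corpora is one simple way to surface the divergent semantic architectures the paper reports, though the study itself may rely on richer text processing and network statistics.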
Related papers
- Pluralism in AI Governance: Toward Sociotechnical Alignment and Normative Coherence [0.16921396880325779]
The study synthesises frameworks including Full-Stack Alignment, Thick Models of Value, Value Sensitive Design, and Public Constitutional AI.
It introduces a layered framework linking values, mechanisms, and strategies, and maps tensions such as fairness versus efficiency, transparency versus security, and privacy versus equity.
The study contributes a holistic, value-sensitive model of AI governance, reframing regulation as a proactive mechanism for embedding public values into sociotechnical systems.
arXiv Detail & Related papers (2026-02-04T14:28:56Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics.
We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight.
Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Deciding how to respond: A deliberative framework to guide policymaker responses to AI systems [0.0]
We argue that by operationalising the concept of freedom, a complementary approach can be developed.
The resulting framework is structured around coordinative, communicative and decision spaces.
arXiv Detail & Related papers (2025-08-05T17:25:14Z) - Epistemic Scarcity: The Economics of Unresolvable Unknowns [0.0]
We argue that AI systems are incapable of performing the core functions of economic coordination.
We critique dominant ethical AI frameworks as extensions of constructivist rationalism.
arXiv Detail & Related papers (2025-07-02T08:46:24Z) - Resource Rational Contractualism Should Guide AI Alignment [69.07915246220985]
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse.
We propose Resource-Rationalism: a framework where AI systems approximate the agreements rational parties would form.
An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z) - Unlocking the Black Box: Analysing the EU Artificial Intelligence Act's Framework for Explainability in AI [0.0]
The need for eXplainable AI (XAI) is evident in fields such as healthcare, credit scoring, policing and the criminal justice system.
At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act.
This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies.
arXiv Detail & Related papers (2025-01-24T16:30:19Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework [0.9622882291833615]
This paper proposes an alternative contextual, coherent, and commensurable (3C) framework for regulating artificial intelligence (AI)
To ensure contextuality, the framework bifurcates the AI life cycle into two phases: learning and deployment for specific tasks, instead of defining foundation or general-purpose models.
To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.
arXiv Detail & Related papers (2023-03-20T15:23:40Z) - The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLM) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)