The Nuclear Analogy in AI Governance Research
- URL: http://arxiv.org/abs/2510.21203v1
- Date: Fri, 24 Oct 2025 07:09:50 GMT
- Title: The Nuclear Analogy in AI Governance Research
- Authors: Sophia Hatz
- Abstract summary: The analogy between Artificial Intelligence (AI) and nuclear weapons is prominent in academic and policy discourse on AI governance. This chapter reviews 43 scholarly works which explicitly draw on the nuclear domain to derive lessons for AI governance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The analogy between Artificial Intelligence (AI) and nuclear weapons is prominent in academic and policy discourse on AI governance. This chapter reviews 43 scholarly works which explicitly draw on the nuclear domain to derive lessons for AI governance. We identify four problem areas where researchers apply nuclear precedents: (1) early development and governance of transformative technologies; (2) international security risks and strategy; (3) international institutions and agreements; and (4) domestic safety regulation. While nuclear-inspired AI proposals are often criticised due to differences across domains, this review clarifies how historical analogies can inform policy development even when technological domains differ substantially. Valuable functions include providing conceptual frameworks for analyzing strategic dynamics, offering cautionary lessons about unsuccessful governance approaches, and expanding policy imagination by legitimizing radical proposals. Given that policymakers already invoke the nuclear analogy, continued critical engagement with these historical precedents remains essential for shaping effective global AI governance.
Related papers
- Reproducibility: The New Frontier in AI Governance [2.1485350418225244]
We argue that current publication speeds in AI, combined with the lack of strong scientific standards via weak protocols, effectively erode the power of policymakers to enact meaningful policy and governance protocols. We evaluate the forthcoming crisis within AI research through the lens of crises in other scientific domains. We argue that policymakers and governments must consider protocols as a core tool in the governance arsenal and demand higher standards for AI research.
arXiv Detail & Related papers (2025-10-13T16:34:25Z)
- Economic Competition, EU Regulation, and Executive Orders: A Framework for Discussing AI Policy Implications in CS Courses [5.898240245765167]
We argue that discussions of the implications of AI policy are not yet present in the computer science curriculum. We propose guiding questions to frame class discussions around AI policy in technical and non-technical (e.g., ethics) CS courses.
arXiv Detail & Related papers (2025-09-29T21:26:53Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. A growing number of efforts address this problem either by (i) contributing research into the risks of AI and their effective mitigation, or by (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z)
- The California Report on Frontier AI Policy [110.35302787349856]
Continued progress in frontier AI carries the potential for profound advances in scientific discovery, economic productivity, and broader social well-being. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI. The report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI.
arXiv Detail & Related papers (2025-06-17T23:33:21Z)
- Promising Topics for U.S.-China Dialogues on AI Risks and Governance [0.0]
Despite strategic competition, there exist concrete opportunities for bilateral U.S.-China cooperation in the development of responsible AI. We analyze more than 40 primary AI policy and corporate governance documents from both nations. Our analysis contributes to understanding how different international governance frameworks might be harmonized to promote global responsible AI development.
arXiv Detail & Related papers (2025-05-12T11:56:19Z)
- Towards an AI Observatory for the Nuclear Sector: A tool for anticipatory governance [0.0]
We call for the creation of an anticipatory system of governance for AI in the nuclear sector. The paper explores the contours of the nuclear AI observatory and an anticipatory system of governance.
arXiv Detail & Related papers (2025-04-16T03:43:15Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA). It outlines the main building blocks of a model template for the FRIA, which can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Open Problems in Technical AI Governance [102.19067750759471]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI. This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- AI Federalism: Shaping AI Policy within States in Germany [0.0]
Recent AI governance research has focused heavily on the analysis of strategy papers and ethics guidelines for AI published by national governments and international bodies.
Subnational institutions have also published documents on Artificial Intelligence, yet these have been largely absent from policy analyses.
This is surprising because AI is connected to many policy areas, such as economic or research policy, where the competences are already distributed between the national and subnational level.
arXiv Detail & Related papers (2021-10-28T16:06:07Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site. The site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.