Domestic frontier AI regulation, an IAEA for AI, an NPT for AI, and a US-led Allied Public-Private Partnership for AI: Four institutions for governing and developing frontier AI
- URL: http://arxiv.org/abs/2507.06379v1
- Date: Tue, 08 Jul 2025 20:32:28 GMT
- Authors: Haydn Belfield
- Abstract summary: I explore four institutions for governing and developing frontier AI. Domestic regimes could be harmonized and monitored through an IAEA for AI. This could be backed up by a Secure Chips Agreement - a Non-Proliferation Treaty (NPT) for AI. Frontier training runs could be carried out by a megaproject between the USA and its allies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compute governance can underpin international institutions for the governance of frontier AI. To demonstrate this, I explore four institutions for governing and developing frontier AI. Next steps for compute-indexed domestic frontier AI regulation could include risk assessments and pre-approvals, data centre usage reports, and release gate regulation. Domestic regimes could be harmonized and monitored through an International AI Agency (IAIA) - an International Atomic Energy Agency (IAEA) for AI. This could be backed up by a Secure Chips Agreement - a Non-Proliferation Treaty (NPT) for AI. This would be a non-proliferation regime for advanced chips, building on the chip export controls: states that do not have an IAIA-certified frontier regulation regime would not be allowed to import advanced chips. Frontier training runs could be carried out by a megaproject between the USA and its allies - a US-led Allied Public-Private Partnership for frontier AI. As a project to develop advanced AI, this could have significant advantages over alternatives led by Big Tech or particular states: it could be more legitimate, secure, safe, non-adversarial, and peaceful, and less prone to misuse. For each of these four scenarios, a key incentive for participation is access to the advanced AI chips that are necessary for frontier training runs and large-scale inference. Together, these institutions can create a situation in which governments can be reassured that frontier AI is developed and deployed in a secure manner, with misuse minimised and benefits widely shared. Building these institutions may take years or decades, but progress is incremental and evolutionary, and the first steps have already been taken.
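The "compute-indexed" trigger at the heart of the domestic regime can be sketched concretely. The snippet below is an illustrative sketch only, not a mechanism from the paper: it uses the standard 6 x parameters x tokens approximation for training FLOPs, and a 1e26 FLOP threshold echoing the figure in US Executive Order 14110; an actual regime would set its own threshold and accounting rules.

```python
# Illustrative sketch of a compute-indexed regulatory trigger.
# Assumptions (not from the paper): the 6*N*D training-FLOP heuristic,
# and a 1e26 FLOP threshold echoing US Executive Order 14110.

TRAINING_FLOP_THRESHOLD = 1e26  # hypothetical regulatory trigger


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens


def requires_pre_approval(n_params: float, n_tokens: float) -> bool:
    """True if a planned training run crosses the compute-indexed threshold."""
    return estimated_training_flops(n_params, n_tokens) >= TRAINING_FLOP_THRESHOLD


# Example: a 1-trillion-parameter model trained on 20 trillion tokens
# comes to 1.2e26 FLOP and would require pre-approval under this sketch.
print(requires_pre_approval(1e12, 20e12))  # True
```

Because the trigger depends only on declared parameters and tokens, data centre usage reports of the kind the abstract mentions would be the natural way for a regulator to check such declarations against actual chip-hours consumed.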
Related papers
- Verifying International Agreements on AI: Six Layers of Verification for Rules on Large-Scale AI Development and Deployment [0.7364983833280243]
This report provides an in-depth overview of AI verification, intended for both policy professionals and technical researchers. We present novel conceptual frameworks, detailed implementation options, and key R&D challenges. We find that states could eventually verify compliance by using six largely independent verification approaches.
arXiv Detail & Related papers (2025-07-21T17:45:15Z)
- From Turing to Tomorrow: The UK's Approach to AI Regulation [0.8339209730515343]
We argue for updated legal frameworks on copyright, discrimination, and AI agents. If the UK gets AI regulation right, it could demonstrate how democratic societies can harness AI's benefits while managing its risks.
arXiv Detail & Related papers (2025-07-03T10:54:43Z)
- AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'. The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z)
- The AI Pentad, the CHARME$^{2}$D Model, and an Assessment of Current-State AI Regulation [5.231576332164012]
This paper aims to establish a unifying model for AI regulation from the perspective of core AI components. We first introduce the AI Pentad, which comprises the five essential components of AI. We then review AI regulatory enablers, including AI registration and disclosure, AI monitoring, and AI enforcement mechanisms.
arXiv Detail & Related papers (2025-03-08T22:58:41Z)
- Superintelligence Strategy: Expert Version [64.7113737051525]
Destabilizing AI developments could raise the odds of great-power conflict. Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers. We introduce the concept of Mutual Assured AI Malfunction.
arXiv Detail & Related papers (2025-03-07T17:53:24Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Taking control: Policies to address extinction risks from AI [0.0]
We argue that voluntary commitments from AI companies would be an inappropriate and insufficient response.
We describe three policy proposals that would meaningfully address the threats from advanced AI.
arXiv Detail & Related papers (2023-10-31T15:53:14Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers [0.8547032097715571]
Know-Your-Customer (KYC) is a standard developed by the banking sector to identify and verify client identity.
KYC could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls.
Unlike the strategy of limiting access to AI chip purchases, regulating the digital access to compute offers more precise controls.
arXiv Detail & Related papers (2023-10-20T16:17:29Z)
- Multinational AGI Consortium (MAGIC): A Proposal for International Coordination on AI [0.0]
MAGIC would be the only institution in the world permitted to develop advanced AI.
We propose one positive vision of the future, where MAGIC, as a global governance regime, can lay the groundwork for long-term, safe regulation of advanced AI.
arXiv Detail & Related papers (2023-10-13T16:12:26Z)
- International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.