International Governance of Civilian AI: A Jurisdictional Certification
Approach
- URL: http://arxiv.org/abs/2308.15514v2
- Date: Mon, 11 Sep 2023 14:03:37 GMT
- Title: International Governance of Civilian AI: A Jurisdictional Certification
Approach
- Authors: Robert Trager, Ben Harack, Anka Reuel, Allison Carnegie, Lennart Heim,
Lewis Ho, Sarah Kreps, Ranjit Lall, Owen Larter, Seán Ó hÉigeartaigh,
Simon Staffell, José Jaime Villalobos
- Abstract summary: This approach represents the extension of a standards, licensing, and liability regime to the global level.
We propose that states establish an International AI Organization (IAIO) to certify state jurisdictions for compliance with international oversight standards.
- Score: 4.5972119455877065
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This report describes trade-offs in the design of international governance
arrangements for civilian artificial intelligence (AI) and presents one
approach in detail. This approach represents the extension of a standards,
licensing, and liability regime to the global level. We propose that states
establish an International AI Organization (IAIO) to certify state
jurisdictions (not firms or AI projects) for compliance with international
oversight standards. States can give force to these international standards by
adopting regulations prohibiting the import of goods whose supply chains embody
AI from non-IAIO-certified jurisdictions. This borrows attributes from models
of existing international organizations, such as the International Civil
Aviation Organization (ICAO), the International Maritime Organization (IMO),
and the Financial Action Task Force (FATF). States can also adopt multilateral
controls on the export of AI product inputs, such as specialized hardware, to
non-certified jurisdictions. Indeed, both the import and export standards could
be required for certification. As international actors reach consensus on risks
of and minimum standards for advanced AI, a jurisdictional certification regime
could mitigate a broad range of potential harms, including threats to public
safety.
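The regime's core trade mechanism can be read as a simple decision rule. Below is a minimal Python sketch, using hypothetical jurisdiction names and a toy certification registry (the report prescribes no implementation): an AI-embodying good is importable only if every jurisdiction in its supply chain is IAIO-certified, and controlled AI inputs are exportable only to certified jurisdictions.
```python
# Illustrative toy model of the proposed jurisdictional certification regime.
# All names and data below are hypothetical assumptions, not from the report.

# Jurisdictions currently certified by the proposed IAIO for compliance
# with international oversight standards.
IAIO_CERTIFIED = {"Jurisdiction A", "Jurisdiction B"}

# AI-product inputs subject to multilateral export controls
# (e.g., specialized hardware).
CONTROLLED_AI_INPUTS = {"specialized AI hardware"}


def may_import(supply_chain_jurisdictions: set[str], embodies_ai: bool) -> bool:
    """Import rule: a good whose supply chain embodies AI is admitted only
    if every jurisdiction in that supply chain is IAIO-certified."""
    if not embodies_ai:
        return True
    return supply_chain_jurisdictions <= IAIO_CERTIFIED


def may_export(item: str, destination: str) -> bool:
    """Export rule: controlled AI inputs may only go to certified jurisdictions."""
    if item in CONTROLLED_AI_INPUTS:
        return destination in IAIO_CERTIFIED
    return True


# A good with one non-certified link in its AI supply chain is blocked:
print(may_import({"Jurisdiction A", "Jurisdiction C"}, embodies_ai=True))  # False
# Specialized hardware may not be exported to a non-certified jurisdiction:
print(may_export("specialized AI hardware", "Jurisdiction C"))  # False
```
As the abstract notes, adopting both the import and export rules could itself be made a condition of certification, so participation in the regime becomes self-reinforcing.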
Related papers
- Verifying International Agreements on AI: Six Layers of Verification for Rules on Large-Scale AI Development and Deployment [0.7364983833280243]
This report provides an in-depth overview of AI verification, intended for both policy professionals and technical researchers. We present novel conceptual frameworks, detailed implementation options, and key R&D challenges. We find that states could eventually verify compliance by using six largely independent verification approaches.
arXiv Detail & Related papers (2025-07-21T17:45:15Z) - Mechanisms to Verify International Agreements About AI Development [0.0]
The report aims to demonstrate how countries could practically verify claims about each other's AI development and deployment. The focus is on international agreements and state-involved AI development, but these approaches could also be applied to domestic regulation of companies.
arXiv Detail & Related papers (2025-06-18T20:28:54Z) - International Security Applications of Flexible Hardware-Enabled Guarantees [0.0]
Flexible hardware-enabled guarantees (flexHEGs) could enable internationally trustworthy AI governance by establishing standardized designs, robust ecosystem defenses, and clear operational parameters for AI-relevant chips. We analyze four critical international security applications: limiting proliferation to address malicious use, implementing safety norms to prevent loss of control, managing risks from military AI systems, and supporting strategic stability through balance-of-power mechanisms while respecting national sovereignty. The report addresses critical implementation challenges, including technical thresholds for AI-relevant chips, management of existing non-flexHEG hardware, and safeguards against abuse of governance power.
arXiv Detail & Related papers (2025-06-18T03:10:49Z) - Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts [0.0]
ISO standards aim to foster responsible development by embedding fairness, transparency, and risk management into AI systems.
Their effectiveness varies across diverse regulatory landscapes, from the EU's risk-based AI Act to China's stability-focused measures.
This paper introduces a novel Comparative Risk-Impact Assessment Framework to evaluate how well ISO standards address ethical risks.
arXiv Detail & Related papers (2025-04-22T00:44:20Z) - AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems.
Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'.
The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z) - International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty [0.0]
Malicious use or malfunction of advanced general-purpose AI (GPAI) poses risks that could lead to 'marginalisation or extinction of humanity'.
To address these risks, there are an increasing number of proposals for international agreements on AI safety.
We propose a treaty establishing a compute threshold above which development requires rigorous oversight.
arXiv Detail & Related papers (2025-03-18T16:29:57Z) - Position: A taxonomy for reporting and describing AI security incidents [57.98317583163334]
We argue that a specific taxonomy is required to describe and report security incidents of AI systems.
Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
arXiv Detail & Related papers (2024-12-19T13:50:26Z) - The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA).
This article outlines the main building blocks of a model template for the FRIA.
It can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z) - Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - The Role of AI Safety Institutes in Contributing to International Standards for Frontier AI Safety [0.0]
We argue that the AI Safety Institutes (AISIs) are well-positioned to contribute to the international standard-setting processes for AI safety.
We propose and evaluate three models for involvement: Seoul Declaration Signatories; US (and other Seoul Declaration Signatories) and China; and Globally Inclusive.
arXiv Detail & Related papers (2024-09-17T16:12:54Z) - The potential functions of an international institution for AI safety. Insights from adjacent policy areas and recent trends [0.0]
The OECD, the G7, the G20, UNESCO, and the Council of Europe have already started developing frameworks for ethical and responsible AI governance.
This chapter reflects on what functions an international AI safety institute could perform.
arXiv Detail & Related papers (2024-08-31T10:04:53Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z) - Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework [0.9622882291833615]
This paper proposes an alternative contextual, coherent, and commensurable (3C) framework for regulating artificial intelligence (AI).
To ensure contextuality, the framework bifurcates the AI life cycle into two phases: learning and deployment for specific tasks, instead of defining foundation or general-purpose models.
To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.
arXiv Detail & Related papers (2023-03-20T15:23:40Z) - Regulating Artificial Intelligence: Proposal for a Global Solution [6.037312672659089]
We argue that AI-related challenges cannot be tackled effectively without sincere international coordination.
We propose the establishment of an international AI governance framework organized around a new AI regulatory agency.
arXiv Detail & Related papers (2020-05-22T09:24:07Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)