Confronting Catastrophic Risk: The International Obligation to Regulate Artificial Intelligence
- URL: http://arxiv.org/abs/2503.18983v1
- Date: Sun, 23 Mar 2025 06:24:45 GMT
- Title: Confronting Catastrophic Risk: The International Obligation to Regulate Artificial Intelligence
- Authors: Bryan Druzin, Anatole Boute, Michael Ramsden
- Abstract summary: We argue that there exists an international obligation to mitigate the threat of human extinction by AI, and that the right to life within international human rights law places a positive obligation on states to proactively take regulatory action to mitigate the potential existential risk of AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While artificial intelligence (AI) holds enormous promise, many experts in the field are warning that there is a non-trivial chance that the development of AI poses an existential threat to humanity. Existing regulatory initiatives do not address this threat but instead focus on discrete AI-related risks such as consumer safety, cybersecurity, data protection, and privacy. In the absence of regulatory action to address the possible risk of human extinction by AI, the question arises: What legal obligations, if any, does public international law impose on states to regulate its development? Grounded in the precautionary principle, we argue that there exists an international obligation to mitigate the threat of human extinction by AI. Often invoked in relation to environmental regulation and the regulation of potentially harmful technologies, the principle holds that in situations where there is the potential for significant harm, even in the absence of full scientific certainty, preventive measures should not be postponed if delayed action may result in irreversible consequences. We argue that the precautionary principle is a general principle of international law and, therefore, that there is a positive obligation on states under the right to life within international human rights law to proactively take regulatory action to mitigate the potential existential risk of AI. This is significant because, if an international obligation to regulate the development of AI can be established under international law, then the basic legal framework would be in place to address this evolving threat.
Related papers
- A proposal for an incident regime that tracks and counters threats to national security posed by AI systems [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. Our proposal is timely, given ongoing policy interest in the potential national security threats posed by AI systems.
arXiv Detail & Related papers (2025-03-25T17:51:50Z) - Position: AI agents should be regulated based on autonomous action sequences [0.0]
We argue that AI agents should be regulated based on the sequence of actions they autonomously take.
We discuss relevant regulations and recommendations from AI scientists regarding existential risks.
arXiv Detail & Related papers (2025-02-07T09:40:48Z) - Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity [0.0]
This paper critically examines the evolving ethical and regulatory challenges posed by the integration of artificial intelligence in cybersecurity. We trace the historical development of AI regulation, highlighting major milestones from theoretical discussions in the 1940s to the implementation of recent global frameworks such as the European Union AI Act. Ethical concerns such as bias, transparency, accountability, privacy, and human oversight are explored in depth, along with their implications for AI-driven cybersecurity systems.
arXiv Detail & Related papers (2025-01-15T18:17:37Z) - Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z) - Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework [0.9622882291833615]
This paper proposes an alternative contextual, coherent, and commensurable (3C) framework for regulating artificial intelligence (AI).
To ensure contextuality, the framework bifurcates the AI life cycle into two phases: learning and deployment for specific tasks, instead of defining foundation or general-purpose models.
To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.
arXiv Detail & Related papers (2023-03-20T15:23:40Z) - Voluntary safety commitments provide an escape from over-regulation in AI development [8.131948859165432]
This work reveals for the first time how voluntary commitments, with sanctions imposed either by peers or an institution, lead to socially beneficial outcomes.
Results are directly relevant for the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.
arXiv Detail & Related papers (2021-04-08T12:54:56Z) - Regulating Artificial Intelligence: Proposal for a Global Solution [6.037312672659089]
We argue that AI-related challenges cannot be tackled effectively without sincere international coordination.
We propose the establishment of an international AI governance framework organized around a new AI regulatory agency.
arXiv Detail & Related papers (2020-05-22T09:24:07Z)