Taking control: Policies to address extinction risks from AI
- URL: http://arxiv.org/abs/2310.20563v1
- Date: Tue, 31 Oct 2023 15:53:14 GMT
- Title: Taking control: Policies to address extinction risks from AI
- Authors: Andrea Miotti and Akash Wasil
- Abstract summary: We argue that voluntary commitments from AI companies would be an inappropriate and insufficient response.
We describe three policy proposals that would meaningfully address the threats from advanced AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper provides policy recommendations to reduce extinction risks from
advanced artificial intelligence (AI). First, we briefly provide background
information about extinction risks from AI. Second, we argue that voluntary
commitments from AI companies would be an inappropriate and insufficient
response. Third, we describe three policy proposals that would meaningfully
address the threats from advanced AI: (1) establishing a Multinational AGI
Consortium to enable democratic oversight of advanced AI (MAGIC), (2)
implementing a global cap on the amount of computing power used to train an AI
system (global compute cap), and (3) requiring affirmative safety evaluations
to ensure that risks are kept below acceptable levels (gating critical
experiments). MAGIC would be a secure, safety-focused, internationally-governed
institution responsible for reducing risks from advanced AI and performing
research to safely harness the benefits of AI. MAGIC would also maintain
emergency response infrastructure (kill switch) to swiftly halt AI development
or withdraw model deployment in the event of an AI-related emergency. The
global compute cap would end the corporate race toward dangerous AI systems
while enabling the vast majority of AI innovation to continue unimpeded. Gating
critical experiments would ensure that companies developing powerful AI systems
are required to present affirmative evidence that these models keep extinction
risks below an acceptable threshold. After describing these recommendations, we
propose intermediate steps that the international community could take to
implement these proposals and lay the groundwork for international coordination
around advanced AI.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Societal Adaptation to Advanced AI [1.2607853680700076]
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse.
We urge a complementary approach: increasing societal adaptation to advanced AI.
We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against and remedy potentially harmful uses of AI systems.
arXiv Detail & Related papers (2024-05-16T17:52:12Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI [10.550015825854837]
A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight.
Frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of frontier AI systems.
This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators.
arXiv Detail & Related papers (2023-10-22T23:37:48Z)
- Multinational AGI Consortium (MAGIC): A Proposal for International Coordination on AI [0.0]
MAGIC would be the only institution in the world permitted to develop advanced AI.
We propose one positive vision of the future, where MAGIC, as a global governance regime, can lay the groundwork for long-term, safe regulation of advanced AI.
arXiv Detail & Related papers (2023-10-13T16:12:26Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause harm;
AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;
organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing this data effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.