An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI
- URL: http://arxiv.org/abs/2310.14455v3
- Date: Mon, 6 Nov 2023 18:52:31 GMT
- Title: An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI
- Authors: Ross Gruetzemacher, Alan Chan, Kevin Frazier, Christy Manning, Štěpán Los, James Fox, José Hernández-Orallo, John Burden, Matija Franklin, Clíodhna Ní Ghuidhir, Mark Bailey, Daniel Eth, Toby Pilditch, Kyle Kilian
- Abstract summary: A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight. Frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of frontier AI systems. This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given rapid progress toward advanced AI and risks from frontier AI systems (advanced AI systems pushing the boundaries of the AI capabilities frontier), the creation and implementation of AI governance and regulatory schemes deserve prioritization and substantial investment. However, the status quo is untenable and, frankly, dangerous. A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight. In response, frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of frontier AI systems. Yet the budding AI risk evaluation ecosystem faces significant coordination challenges, such as a limited diversity of evaluators, suboptimal allocation of effort, and perverse incentives. This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators. Such a consortium could play a critical role in international efforts to mitigate societal-scale risks from advanced AI, including by managing responsible scaling policies and coordinating evaluation-based risk responses. In this paper, we discuss the current evaluation ecosystem and its shortcomings, propose an international consortium for advanced AI risk evaluations, discuss issues regarding its implementation, draw lessons from previous international institutions and from existing proposals for international AI governance institutions, and, finally, recommend concrete steps to advance the establishment of the proposed consortium: (i) solicit feedback from stakeholders, (ii) conduct additional research, (iii) hold one or more stakeholder workshops, (iv) analyze feedback and create a final proposal, (v) solicit funding, and (vi) create the consortium.
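The consortium the abstract describes is institutional rather than algorithmic, but its coordination mechanics (pooling evaluations from many evaluators, spotting coverage gaps, and triggering a coordinated response when risk thresholds are crossed) can be made concrete with a toy sketch. The Python below is purely illustrative and not from the paper: the risk-area taxonomy, the 0-1 score scale, the `ConsortiumRegistry` class, and the quorum rule in `should_pause_scaling` are all hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskArea(Enum):
    """Illustrative risk areas; the paper does not fix a taxonomy."""
    CYBER = "cyber offense"
    BIO = "biological misuse"
    AUTONOMY = "loss of control"
    PERSUASION = "large-scale manipulation"


@dataclass
class Evaluation:
    """One evaluator's assessment of one frontier system in one risk area."""
    evaluator: str   # e.g. a third-party evaluation organization
    system: str      # the frontier AI system under evaluation
    area: RiskArea
    score: float     # hypothetical 0-1 risk score; higher = riskier


@dataclass
class ConsortiumRegistry:
    """Toy shared registry pooling evaluations across consortium members."""
    evaluations: list[Evaluation] = field(default_factory=list)

    def submit(self, ev: Evaluation) -> None:
        self.evaluations.append(ev)

    def coverage_gaps(self, system: str) -> list[RiskArea]:
        """Risk areas no one has evaluated yet -- one concrete form of the
        'suboptimal allocation of effort' the abstract mentions."""
        covered = {e.area for e in self.evaluations if e.system == system}
        return [a for a in RiskArea if a not in covered]

    def should_pause_scaling(self, system: str, threshold: float = 0.7,
                             min_evaluators: int = 2) -> bool:
        """Hypothetical responsible-scaling trigger: pause further scaling
        if at least `min_evaluators` independent evaluators score the
        system above `threshold` in the same risk area."""
        for area in RiskArea:
            flagged = {e.evaluator for e in self.evaluations
                       if e.system == system and e.area == area
                       and e.score > threshold}
            if len(flagged) >= min_evaluators:
                return True
        return False


if __name__ == "__main__":
    registry = ConsortiumRegistry()
    registry.submit(Evaluation("eval-org-a", "model-x", RiskArea.CYBER, 0.82))
    registry.submit(Evaluation("eval-org-b", "model-x", RiskArea.CYBER, 0.75))
    print(registry.coverage_gaps("model-x"))         # BIO, AUTONOMY, PERSUASION
    print(registry.should_pause_scaling("model-x"))  # True: two evaluators agree
```

The two-evaluator quorum is one possible way to encode the abstract's call for a diversity of evaluators: under this sketch, no single member's result, favorable or not, settles the question on its own.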
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization (2024-10-25)
  Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization. We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
- Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment (2024-08-02)
  The study introduces our Responsible AI (RAI) Question Bank, a comprehensive framework and tool designed to support diverse AI initiatives. By integrating AI ethics principles such as fairness, transparency, and accountability into a structured question format, the RAI Question Bank aids in identifying potential risks.
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits (2024-03-21)
  General purpose AI seems to have lowered the barriers for the public to use AI and harness its power. We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness (2024-02-21)
  The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs). It examines the challenges introduced by AI components and their impact on testing procedures, and suggests future directions for research and development of AI in AV technology.
- The risks of risk-based AI regulation: taking liability seriously (2023-11-03)
  The development and regulation of AI seem to have reached a critical stage. Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4. This paper analyses the most advanced legal proposal, the European Union's AI Act.
- Taking control: Policies to address extinction risks from AI (2023-10-31)
  We argue that voluntary commitments from AI companies would be an inappropriate and insufficient response. We describe three policy proposals that would meaningfully address the threats from advanced AI.
- Managing extreme AI risks amid rapid progress (2023-10-26)
  We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems. There is a lack of consensus about how exactly such risks arise and how to manage them. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
- AI Hazard Management: A framework for the systematic management of root causes for AI risks (2023-10-25)
  This paper introduces the AI Hazard Management (AIHM) framework, which provides a structured process to systematically identify, assess, and treat AI hazards. It builds upon an AI hazard list drawn from a comprehensive state-of-the-art analysis.
- International Institutions for Advanced AI (2023-07-10)
  International institutions may have an important role to play in ensuring advanced AI systems benefit humanity. This paper identifies a set of governance functions that could be performed at an international level to address these challenges. It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
- QB4AIRA: A Question Bank for AI Risk Assessment (2023-05-16)
  QB4AIRA comprises 293 prioritized questions covering a wide range of AI risk areas. It serves as a valuable resource for stakeholders in assessing and managing AI risks.
- Quantitative AI Risk Assessments: Opportunities and Challenges (2022-09-13)
  AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society. Their risks have led to proposed regulations, litigation, and general societal concerns. This paper explores the concept of a quantitative AI Risk Assessment.
This list is automatically generated from the titles and abstracts of the papers on this site.