Risk assessment at AGI companies: A review of popular risk assessment techniques from other safety-critical industries
- URL: http://arxiv.org/abs/2307.08823v1
- Date: Mon, 17 Jul 2023 20:36:51 GMT
- Title: Risk assessment at AGI companies: A review of popular risk assessment techniques from other safety-critical industries
- Authors: Leonie Koessler, Jonas Schuett
- Abstract summary: Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of building artificial general intelligence (AGI).
There are increasing concerns that AGI would pose catastrophic risks.
This paper reviews popular risk assessment techniques from other safety-critical industries and suggests ways in which AGI companies could use them.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of
building artificial general intelligence (AGI) - AI systems that perform as
well as or better than humans on a wide variety of cognitive tasks. However,
there are increasing concerns that AGI would pose catastrophic risks. In light
of this, AGI companies need to drastically improve their risk management
practices. To support such efforts, this paper reviews popular risk assessment
techniques from other safety-critical industries and suggests ways in which AGI
companies could use them to assess catastrophic risks from AI. The paper
discusses three risk identification techniques (scenario analysis, fishbone
method, and risk typologies and taxonomies), five risk analysis techniques
(causal mapping, Delphi technique, cross-impact analysis, bow tie analysis, and
system-theoretic process analysis), and two risk evaluation techniques
(checklists and risk matrices). For each of them, the paper explains how they
work, suggests ways in which AGI companies could use them, discusses their
benefits and limitations, and makes recommendations. Finally, the paper
discusses when to conduct risk assessments, when to use which technique, and
how to use any of them. The reviewed techniques will be obvious to risk
management professionals in other industries. And they will not be sufficient
to assess catastrophic risks from AI. However, AGI companies should not skip
the straightforward step of reviewing best practices from other industries.
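
As a concrete illustration of the risk evaluation step, here is a minimal Python sketch of a risk matrix, one of the two risk evaluation techniques the paper reviews (alongside checklists). The five-point scales, score bands, and ratings are illustrative assumptions, not values from the paper; organizations calibrate these themselves, often weighting severity non-linearly for catastrophic outcomes.

```python
# Minimal risk matrix sketch. All labels and thresholds below are
# illustrative assumptions, not taken from the paper.
from enum import IntEnum

class Likelihood(IntEnum):
    RARE = 1
    UNLIKELY = 2
    POSSIBLE = 3
    LIKELY = 4
    ALMOST_CERTAIN = 5

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MINOR = 2
    MODERATE = 3
    MAJOR = 4
    CATASTROPHIC = 5

def rate_risk(likelihood: Likelihood, severity: Severity) -> str:
    """Map a (likelihood, severity) cell to a qualitative rating.

    The score bands are hypothetical; real matrices are calibrated
    per organization, and severity is often weighted non-linearly so
    that catastrophic outcomes are never rated acceptable.
    """
    score = int(likelihood) * int(severity)
    if score >= 15:
        return "intolerable"  # mitigate before proceeding
    if score >= 8:
        return "tolerable"    # reduce as far as reasonably practicable
    return "acceptable"       # monitor and review

# Example: a hypothetical misuse scenario judged possible but catastrophic.
print(rate_risk(Likelihood.POSSIBLE, Severity.CATASTROPHIC))  # -> intolerable
```

Even this toy version surfaces a widely noted limitation of risk matrices: multiplying ordinal scales compresses very different risks into the same score, which is one reason to use them alongside the identification and analysis techniques listed above rather than on their own.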
Related papers
- Mapping Technical Safety Research at AI Companies: A literature review and incentives analysis
Report analyzes the technical research into safe AI development being conducted by three leading AI companies: Anthropic, Google DeepMind, and OpenAI.
We defined safe AI development as developing AI systems that are unlikely to pose large-scale misuse or accident risks.
arXiv Detail & Related papers (2024-09-12T09:34:55Z)
- Risks and NLP Design: A Case Study on Procedural Document QA
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence
The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public.
A lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them.
This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference.
arXiv Detail & Related papers (2024-08-14T10:32:06Z)
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Assessing the State of AI Policy
This work provides an overview of AI legislation and directives at the international, U.S. state, city and federal levels.
It also reviews relevant business standards, and technical society initiatives.
arXiv Detail & Related papers (2024-07-31T16:09:25Z)
- Risks and Opportunities of Open-Source Generative AI
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges
AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society.
These risks have led to proposed regulations, litigation, and general societal concerns.
This paper explores the concept of a quantitative AI Risk Assessment.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)