Risk assessment at AGI companies: A review of popular risk assessment
techniques from other safety-critical industries
- URL: http://arxiv.org/abs/2307.08823v1
- Date: Mon, 17 Jul 2023 20:36:51 GMT
- Authors: Leonie Koessler, Jonas Schuett
- Abstract summary: Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of building artificial general intelligence (AGI).
There are increasing concerns that AGI would pose catastrophic risks.
This paper reviews popular risk assessment techniques from other safety-critical industries and suggests ways in which AGI companies could use them.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of
building artificial general intelligence (AGI) - AI systems that perform as
well as or better than humans on a wide variety of cognitive tasks. However,
there are increasing concerns that AGI would pose catastrophic risks. In light
of this, AGI companies need to drastically improve their risk management
practices. To support such efforts, this paper reviews popular risk assessment
techniques from other safety-critical industries and suggests ways in which AGI
companies could use them to assess catastrophic risks from AI. The paper
discusses three risk identification techniques (scenario analysis, fishbone
method, and risk typologies and taxonomies), five risk analysis techniques
(causal mapping, Delphi technique, cross-impact analysis, bow tie analysis, and
system-theoretic process analysis), and two risk evaluation techniques
(checklists and risk matrices). For each of them, the paper explains how they
work, suggests ways in which AGI companies could use them, discusses their
benefits and limitations, and makes recommendations. Finally, the paper
discusses when to conduct risk assessments, when to use which technique, and
how to use any of them. The reviewed techniques will be obvious to risk
management professionals in other industries. And they will not be sufficient
to assess catastrophic risks from AI. However, AGI companies should not skip
the straightforward step of reviewing best practices from other industries.
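As an illustration of one of the two risk evaluation techniques the paper reviews, here is a minimal sketch of a risk matrix in Python. The category labels, scoring rule, and thresholds are illustrative assumptions, not taken from the paper; real matrices are calibrated to the organization's risk appetite.

```python
# Sketch of a risk matrix: map a (likelihood, severity) pair to a
# priority band by multiplying the two ordinal scales. All labels and
# thresholds below are hypothetical.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
SEVERITY = ["negligible", "minor", "moderate", "major", "catastrophic"]

def evaluate(likelihood: str, severity: str) -> str:
    """Return a priority band for a (likelihood, severity) pair."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (SEVERITY.index(severity) + 1)
    if score >= 15:
        return "intolerable"   # e.g. pause development until mitigated
    if score >= 6:
        return "ALARP"         # reduce as low as reasonably practicable
    return "acceptable"

print(evaluate("possible", "catastrophic"))  # 3 * 5 = 15 -> "intolerable"
```

A known limitation, which the paper's discussion of benefits and limitations would cover, is that multiplying ordinal scales compresses very different risks (e.g. "likely, minor" and "rare, catastrophic") into similar scores.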
Related papers
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Affirmative safety: An approach to risk management for high-risk AI [6.133009503054252]
We argue that entities developing or deploying high-risk AI systems should be required to present evidence of affirmative safety.
We propose a risk management approach for advanced AI in which model developers must provide evidence that their activities keep certain risks below regulator-set thresholds.
arXiv Detail & Related papers (2024-04-14T20:48:55Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI [11.240642213359267]
Many exhaustive taxonomies are possible, and some are useful -- particularly if they reveal new risks or practical approaches to safety.
This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate?
We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, and risks from deliberate misuse.
arXiv Detail & Related papers (2023-06-12T07:55:18Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [9.262092738841979]
AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society.
Risks have led to proposed regulations, litigation, and general societal concerns.
This paper explores the concept of a quantitative AI Risk Assessment.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
- X-Risk Analysis for AI Research [24.78742908726579]
We provide a guide for how to analyze AI x-risk.
First, we review how systems can be made safer today.
Next, we discuss strategies for having long-term impacts on the safety of future systems.
arXiv Detail & Related papers (2022-06-13T00:22:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented (including all listed details) and is not responsible for any consequences of its use.