AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
- URL: http://arxiv.org/abs/2406.17864v1
- Date: Tue, 25 Jun 2024 18:13:05 GMT
- Title: AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
- Authors: Yi Zeng, Kevin Klyman, Andy Zhou, Yu Yang, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li
- Abstract summary: We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
- Score: 88.32153122712478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China and 16 company policies worldwide, making a significant step towards establishing a unified language for generative AI safety evaluation. We identify 314 unique risk categories organized into a four-tiered taxonomy. At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. The taxonomy establishes connections between various descriptions and approaches to risk, highlighting the overlaps and discrepancies between public and private sector conceptions of risk. By providing this unified framework, we aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
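As a rough illustration of how such a four-tiered taxonomy could be represented programmatically, here is a minimal sketch; the class, field names, and anything below the top tier are assumptions for illustration, not structures defined by the paper. Only the four level-1 category names come from the abstract.
```python
from dataclasses import dataclass, field

@dataclass
class RiskCategory:
    """One node in a four-tiered risk taxonomy."""
    name: str
    level: int                                        # 1 (top tier) through 4 (most granular)
    sources: list[str] = field(default_factory=list)  # policies that name this risk
    children: list["RiskCategory"] = field(default_factory=list)

# Level-1 categories come from the abstract; the lower tiers (down to the
# 314 granular categories at the lowest level) would be filled in from the paper.
taxonomy = [
    RiskCategory("System & Operational Risks", level=1),
    RiskCategory("Content Safety Risks", level=1),
    RiskCategory("Societal Risks", level=1),
    RiskCategory("Legal & Rights Risks", level=1),
]

def count_leaf_categories(nodes: list[RiskCategory]) -> int:
    """Count the granular (leaf) categories under a list of nodes."""
    return sum(count_leaf_categories(n.children) if n.children else 1 for n in nodes)
```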
Related papers
- AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems.
Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'.
The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z)
- AI Risk Atlas: Taxonomy and Tooling for Navigating AI Risks and Resources [24.502423087280008]
We introduce the AI Risk Atlas, a structured taxonomy that consolidates AI risks from diverse sources and aligns them with governance frameworks.
We also present the Risk Atlas Nexus, a collection of open-source tools designed to bridge the divide between risk definitions, benchmarks, datasets, and mitigation strategies.
arXiv Detail & Related papers (2025-02-26T12:23:14Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Position: A taxonomy for reporting and describing AI security incidents [57.98317583163334]
We argue that a specific taxonomy is required to describe and report security incidents of AI systems.
Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- A Taxonomy of Systemic Risks from General-Purpose AI [2.5956465292067867]
We consider systemic risks as large-scale threats that can affect entire societies or economies.
Key sources of systemic risk emerge from knowledge gaps, challenges in recognizing harm, and the unpredictable trajectory of AI development.
This paper contributes to AI safety research by providing a structured groundwork for understanding and addressing the potential large-scale negative societal impacts of general-purpose AI.
arXiv Detail & Related papers (2024-11-24T22:16:18Z)
- Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems [2.3266896180922187]
We compile an extensive catalog of risk sources and risk management measures for general-purpose AI systems.
This work involves identifying technical, operational, and societal risks across model development, training, and deployment stages.
The catalog is released under a public domain license for ease of direct use by stakeholders in AI governance and standards.
arXiv Detail & Related papers (2024-10-30T21:32:56Z)
- The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence [35.77247656798871]
The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public.
A lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them.
This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference.
arXiv Detail & Related papers (2024-08-14T10:32:06Z)
- AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies [80.90138009539004]
AIR-Bench 2024 is the first AI safety benchmark aligned with emerging government regulations and company policies.
It decomposes 8 government regulations and 16 company policies into a four-tiered safety taxonomy with granular risk categories in the lowest tier.
We evaluate leading language models on AIR-Bench 2024, uncovering insights into their alignment with specified safety concerns.
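As a rough illustration of the kind of per-category evaluation such a benchmark enables, here is a minimal sketch; the function, category labels, and scoring rule are illustrative assumptions, not AIR-Bench's actual protocol.
```python
from collections import defaultdict

def per_category_safety_rate(results):
    """results: iterable of (risk_category, is_safe_response) pairs.

    Returns the fraction of safe responses per granular risk category,
    i.e. the sort of per-category breakdown a safety benchmark could report.
    """
    totals, safe = defaultdict(int), defaultdict(int)
    for category, is_safe in results:
        totals[category] += 1
        safe[category] += int(is_safe)
    return {c: safe[c] / totals[c] for c in totals}

# Hypothetical usage with made-up category labels and judgments:
print(per_category_safety_rate([
    ("Content Safety Risks / Hate Speech", True),
    ("Content Safety Risks / Hate Speech", False),
    ("Societal Risks / Disinformation", True),
]))
```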
arXiv Detail & Related papers (2024-07-11T21:16:48Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act [0.0]
This work proposes a taxonomy focusing on (geo)political risks associated with AI.
It identifies 12 risks in total divided into four categories: (1) Geopolitical Pressures, (2) Malicious Usage, (3) Environmental, Social, and Ethical Risks, and (4) Privacy and Trust Violations.
arXiv Detail & Related papers (2024-04-17T15:32:56Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- RiskQ: Risk-sensitive Multi-Agent Reinforcement Learning Value Factorization [49.26510528455664]
We introduce the Risk-sensitive Individual-Global-Max (RIGM) principle as a generalization of the Individual-Global-Max (IGM) and Distributional IGM (DIGM) principles.
Through extensive experiments, we show that RiskQ obtains promising performance.
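For context, a minimal sketch of the consistency condition being generalized; the notation below is assumed, not taken from the paper. The IGM principle requires the greedy joint action under the joint value to match the per-agent greedy actions, and RIGM restates this consistency under a risk metric applied to return distributions rather than expected values.
```latex
% IGM: greedy joint action of the joint value agrees with per-agent greedy actions
\operatorname*{arg\,max}_{\mathbf{a}} Q_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{a})
  = \Big(\operatorname*{arg\,max}_{a_1} Q_1(\tau_1, a_1), \dots,
         \operatorname*{arg\,max}_{a_n} Q_n(\tau_n, a_n)\Big)

% RIGM: the same consistency under a risk metric \psi (e.g. VaR or CVaR)
% applied to return distributions Z instead of expected values Q
\operatorname*{arg\,max}_{\mathbf{a}} \psi\big[Z_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{a})\big]
  = \Big(\operatorname*{arg\,max}_{a_1} \psi\big[Z_1(\tau_1, a_1)\big], \dots,
         \operatorname*{arg\,max}_{a_n} \psi\big[Z_n(\tau_n, a_n)\big]\Big)
```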
arXiv Detail & Related papers (2023-11-03T07:18:36Z)
- AI Risk Profiles: A Standards Proposal for Pre-Deployment AI Risk Disclosures [0.8702432681310399]
We propose a risk profiling standard which can guide downstream decision-making.
The standard is built on our proposed taxonomy of AI risks, which reflects a high-level categorization of the wide variety of risks proposed in the literature.
We apply this methodology to a number of prominent AI systems using publicly available information.
arXiv Detail & Related papers (2023-09-22T20:45:15Z)
- TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI [11.240642213359267]
Many exhaustive taxonomies are possible, and some are useful -- particularly if they reveal new risks or practical approaches to safety.
This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate?
We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, and risks from deliberate misuse.
arXiv Detail & Related papers (2023-06-12T07:55:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.