Dimensional Characterization and Pathway Modeling for Catastrophic AI Risks
- URL: http://arxiv.org/abs/2508.06411v1
- Date: Fri, 08 Aug 2025 15:56:05 GMT
- Title: Dimensional Characterization and Pathway Modeling for Catastrophic AI Risks
- Authors: Ze Shen Chin
- Abstract summary: This paper examines six commonly discussed AI catastrophic risks: CBRN, cyber offense, sudden loss of control, gradual loss of control, environmental risk, and geopolitical risk. We characterize these risks across seven key dimensions, namely intent, competency, entity, polarity, linearity, reach, and order. We conduct risk pathway modeling by mapping step-by-step progressions from the initial hazard to the resulting harms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although discourse around the risks of Artificial Intelligence (AI) has grown, it often lacks a comprehensive, multidimensional framework, and concrete causal pathways mapping hazard to harm. This paper aims to bridge this gap by examining six commonly discussed AI catastrophic risks: CBRN, cyber offense, sudden loss of control, gradual loss of control, environmental risk, and geopolitical risk. First, we characterize these risks across seven key dimensions, namely intent, competency, entity, polarity, linearity, reach, and order. Next, we conduct risk pathway modeling by mapping step-by-step progressions from the initial hazard to the resulting harms. The dimensional approach supports systematic risk identification and generalizable mitigation strategies, while risk pathway models help identify scenario-specific interventions. Together, these methods offer a more structured and actionable foundation for managing catastrophic AI risks across the value chain.
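The abstract's dimensional characterization lends itself to a simple data representation. The sketch below is illustrative only: the seven dimension names come from the abstract, but the value encodings and the example pathway steps are assumptions, not the paper's actual scheme.

```python
from dataclasses import dataclass


@dataclass
class RiskProfile:
    """One catastrophic AI risk scored on the paper's seven dimensions.

    Dimension names are taken from the abstract; the string encodings
    below are hypothetical placeholders for the paper's categories.
    """
    name: str
    intent: str      # e.g. deliberate vs. accidental
    competency: str  # AI capability level involved
    entity: str      # who or what initiates the hazard
    polarity: str    # direction of the effect
    linearity: str   # gradual vs. sudden progression
    reach: str       # local, national, or global scope
    order: str       # first-order vs. higher-order harm


# A risk pathway as an ordered hazard-to-harm chain (steps are invented
# for illustration, not quoted from the paper).
cbrn_pathway = [
    "model provides dual-use synthesis knowledge",  # initial hazard
    "malicious actor's acquisition barrier drops",
    "harmful agent is produced and released",
    "mass-casualty harm",                           # resulting harm
]
```

Representing each risk this way is what makes the paper's "generalizable mitigation strategies" tractable: risks sharing a dimension value (say, global reach) can share interventions.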
Related papers
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5
This technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.
arXiv Detail & Related papers (2026-02-16T04:30:06Z)
- Constrained Language Model Policy Optimization via Risk-aware Stepwise Alignment
We propose Risk-aware Stepwise Alignment (RSA), a novel alignment method that incorporates risk awareness into the policy optimization process. RSA mitigates risks induced by excessive model shift away from a reference policy, and it explicitly suppresses low-probability yet high-impact harmful behaviors. Experimental results demonstrate that our method achieves high levels of helpfulness while ensuring strong safety.
arXiv Detail & Related papers (2025-12-30T14:38:02Z)
- Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse
We develop nine detailed cyber risk models. Each model decomposes attacks into steps using the MITRE ATT&CK framework. Individual estimates are aggregated through Monte Carlo simulation.
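The aggregation step described in this abstract can be sketched in a few lines: each attack step gets an uncertain success probability, and Monte Carlo draws propagate that uncertainty through the chain. The uniform ranges and three-step chain below are invented for illustration; the paper's actual models and distributions may differ.

```python
import random


def simulate_attack_chain(step_prob_ranges, trials=100_000, seed=0):
    """Monte Carlo aggregation of per-step estimates.

    Each attack step's success probability is uncertain, modeled here
    (as an assumption) by a uniform range.  The full chain succeeds
    only if every step succeeds; the returned value estimates that
    end-to-end probability.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        # Draw a fresh probability per step, then test each step in order.
        if all(rng.random() < rng.uniform(lo, hi)
               for lo, hi in step_prob_ranges):
            successes += 1
    return successes / trials


# Illustrative three-step chain (ranges are not from the paper).
p = simulate_attack_chain([(0.6, 0.9), (0.3, 0.5), (0.7, 0.95)])
```

With these ranges the end-to-end estimate lands near the product of the per-step means (about 0.25), which is the intuition the simulation formalizes while also exposing the spread around it.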
arXiv Detail & Related papers (2025-12-09T17:54:17Z)
- A Methodology for Quantitative AI Risk Modeling
This paper advances the risk modeling component of AI risk management by introducing a methodology that integrates scenario building with quantitative risk estimation. Our methodology is designed to be applicable to key systemic AI risks, including cyber offense, biological weapon development, harmful manipulation, and loss of control.
arXiv Detail & Related papers (2025-12-09T17:34:59Z)
- The Role of Risk Modeling in Advanced AI Risk Management
Rapidly advancing artificial intelligence (AI) systems introduce novel, uncertain, and potentially catastrophic risks. Managing these risks requires a mature risk-management infrastructure whose cornerstone is rigorous risk modeling. We argue that advanced-AI governance should adopt a similar dual approach and that verifiable, provably-safe AI architectures are urgently needed.
arXiv Detail & Related papers (2025-12-09T15:37:33Z)
- An Artificial Intelligence Value at Risk Approach: Metrics and Models
The state of the art in artificial intelligence risk management appears highly immature in light of upcoming AI regulations. The purpose of this paper is to orient AI stakeholders to the depths of AI risk management.
arXiv Detail & Related papers (2025-09-22T20:27:29Z)
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report
This report presents a comprehensive assessment of frontier AI risks. We identify critical risks in seven areas: cyber offense, biological and chemical risks, persuasion and manipulation, uncontrolled autonomous AI R&D, strategic deception and scheming, self-replication, and collusion.
arXiv Detail & Related papers (2025-07-22T12:44:38Z)
- Adapting Probabilistic Risk Assessment for AI
General-purpose artificial intelligence (AI) systems present an urgent risk management challenge. Current methods often rely on selective testing and undocumented assumptions about risk priorities. This paper introduces the probabilistic risk assessment (PRA) for AI framework.
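At its core, a PRA-style calculation combines an event's occurrence probability with its consequence magnitude and sums over scenarios. The sketch below is a minimal illustration of that idea only; the scenario names, probabilities, and harm figures are invented, not taken from the PRA-for-AI paper.

```python
# Minimal PRA-style sketch: each scenario is summarized (as an
# assumption, not the paper's method) by an annual occurrence
# probability and a harm magnitude in dollars.  Expected annual harm
# is the probability-weighted sum of consequences.
scenarios = {
    "model-enabled cyber offense": (0.05, 1e9),    # (P(event/yr), harm $)
    "loss-of-control incident":    (0.001, 1e11),
}

expected_annual_harm = sum(p * harm for p, harm in scenarios.values())
```

Even this toy version shows why explicit probabilities matter: the rarer scenario dominates the total because its consequence is so much larger, a pattern selective testing alone would not surface.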
arXiv Detail & Related papers (2025-04-25T17:59:14Z)
- An Approach to Technical AGI Safety and Security
We develop an approach to address the risk of harms consequential enough to significantly harm humanity. We focus on technical approaches to misuse and misalignment. We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z)
- Multi-Agent Risks from Advanced AI
Multi-agent systems of advanced AI pose novel and under-explored risks. We identify three key failure modes based on agents' incentives, as well as seven key risk factors. We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z)
- A Formal Framework for Assessing and Mitigating Emergent Security Risks in Generative AI Models: Bridging Theory and Dynamic Risk Mitigation
As generative AI systems, including large language models (LLMs) and diffusion models, advance rapidly, their growing adoption has led to new and complex security risks.
This paper introduces a novel formal framework for categorizing and mitigating these emergent security risks.
We identify previously under-explored risks, including latent space exploitation, multi-modal cross-attack vectors, and feedback-loop-induced model degradation.
arXiv Detail & Related papers (2024-10-15T02:51:32Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Two Types of AI Existential Risk: Decisive and Accumulative
This paper contrasts the conventional "decisive AI x-risk hypothesis" with an "accumulative AI x-risk hypothesis". It argues that the accumulative view can reconcile seemingly incompatible perspectives on AI risks.
arXiv Detail & Related papers (2024-01-15T17:06:02Z)
- Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.