With Great Capabilities Come Great Responsibilities: Introducing the Agentic Risk & Capability Framework for Governing Agentic AI Systems
- URL: http://arxiv.org/abs/2512.22211v1
- Date: Mon, 22 Dec 2025 03:51:34 GMT
- Title: With Great Capabilities Come Great Responsibilities: Introducing the Agentic Risk & Capability Framework for Governing Agentic AI Systems
- Authors: Shaun Khoo, Jessica Foo, Roy Ka-Wei Lee
- Abstract summary: The Agentic Risk & Capability (ARC) Framework is a technical governance framework designed to help organizations identify, assess, and mitigate risks arising from agentic AI systems. Its core contributions: it develops a novel capability-centric perspective to analyze a wide range of agentic AI systems; it distills three primary sources of risk intrinsic to agentic AI systems (components, design, and capabilities); and it establishes a clear nexus between each risk source, specific materialized risks, and corresponding technical controls.
- Score: 11.09031447875337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agentic AI systems present both significant opportunities and novel risks due to their capacity for autonomous action, encompassing tasks such as code execution, internet interaction, and file modification. This poses considerable challenges for effective organizational governance, particularly in comprehensively identifying, assessing, and mitigating diverse and evolving risks. To tackle this, we introduce the Agentic Risk & Capability (ARC) Framework, a technical governance framework designed to help organizations identify, assess, and mitigate risks arising from agentic AI systems. The framework's core contributions are: (1) it develops a novel capability-centric perspective to analyze a wide range of agentic AI systems; (2) it distills three primary sources of risk intrinsic to agentic AI systems - components, design, and capabilities; (3) it establishes a clear nexus between each risk source, specific materialized risks, and corresponding technical controls; and (4) it provides a structured and practical approach to help organizations implement the framework. This framework provides a robust and adaptable methodology for organizations to navigate the complexities of agentic AI, enabling rapid and effective innovation while ensuring the safe, secure, and responsible deployment of agentic AI systems. Our framework is open-sourced at https://govtech-responsibleai.github.io/agentic-risk-capability-framework/.
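The "nexus" the abstract describes - each intrinsic risk source mapping to materialized risks, and each risk to technical controls - can be pictured as a simple lookup structure. The sketch below is a hypothetical illustration only; the specific risks and controls named in it are placeholders and are not taken from the ARC Framework itself.

```python
# Hypothetical sketch of a risk-source -> risk -> control nexus.
# The three top-level keys follow the abstract (components, design,
# capabilities); all entries beneath them are illustrative placeholders.
ARC_NEXUS = {
    "components": {
        "prompt injection via tool output": ["input sanitization", "tool-output quarantine"],
    },
    "design": {
        "unbounded autonomous loops": ["step budgets", "human-in-the-loop checkpoints"],
    },
    "capabilities": {
        "arbitrary code execution": ["sandboxing", "least-privilege credentials"],
    },
}

def controls_for(risk_source: str) -> list[str]:
    """Return every technical control mapped under a given risk source."""
    risks = ARC_NEXUS.get(risk_source, {})
    return [control for controls in risks.values() for control in controls]
```

A governance team could extend such a table per system, then audit coverage by checking that every materialized risk has at least one control.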
Related papers
- Securing Agentic AI Systems -- A Multilayer Security Framework [0.0]
Securing Agentic Artificial Intelligence (AI) systems requires addressing the complex cyber risks introduced by autonomous, decision-making, and adaptive behaviors. Existing AI security frameworks do not adequately address these challenges or the unique nuances of agentic AI. This research develops a lifecycle-aware security framework specifically designed for agentic AI systems.
arXiv Detail & Related papers (2025-12-19T20:22:25Z) - AURA: An Agent Autonomy Risk Assessment Framework [0.0]
AURA (Agent aUtonomy Risk Assessment) is a unified framework designed to detect, quantify, and mitigate risks arising from agentic AI. AURA provides an interactive process to score, evaluate, and mitigate the risks of running one or multiple AI agents, synchronously or asynchronously. It supports responsible and transparent adoption of agentic AI and provides robust risk detection and mitigation while balancing computational resources.
arXiv Detail & Related papers (2025-10-17T15:30:29Z) - A cybersecurity AI agent selection and decision support framework [0.0]
This paper presents a novel, structured decision support framework that aligns AI agent architectures (reactive, cognitive, hybrid, and learning). By integrating agent theory with industry guidelines, this framework provides a transparent and stepwise methodology for selecting and deploying AI solutions.
arXiv Detail & Related papers (2025-10-02T07:38:21Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Web3 x AI Agents: Landscape, Integrations, and Foundational Challenges [49.69200207497795]
The convergence of Web3 technologies and AI agents represents a rapidly evolving frontier poised to reshape decentralized ecosystems. This paper presents the first and most comprehensive analysis of the intersection between Web3 and AI agents, examining five critical dimensions: landscape, economics, governance, security, and trust mechanisms.
arXiv Detail & Related papers (2025-08-04T15:44:58Z) - When Autonomy Goes Rogue: Preparing for Risks of Multi-Agent Collusion in Social Systems [78.04679174291329]
We introduce a proof-of-concept to simulate the risks of malicious multi-agent systems (MAS). We apply this framework to two high-risk fields: misinformation spread and e-commerce fraud. Our findings show that decentralized systems are more effective at carrying out malicious actions than centralized ones.
arXiv Detail & Related papers (2025-07-19T15:17:30Z) - TRiSM for Agentic AI: A Review of Trust, Risk, and Security Management in LLM-based Agentic Multi-Agent Systems [8.683314804719506]
This review presents a structured analysis of Trust, Risk, and Security Management (TRiSM) in the context of Agentic Multi-Agent Systems (AMAS). We begin by examining the conceptual foundations of Agentic AI and highlight its architectural distinctions from traditional AI agents. We then adapt and extend the AI TRiSM framework for Agentic AI, structured around key pillars: Explainability, ModelOps, Security, Privacy, and their lifecycle governance. A risk taxonomy is proposed to capture the unique threats and vulnerabilities of Agentic AI, ranging from coordination failures to ...
arXiv Detail & Related papers (2025-06-04T16:26:11Z) - Multi-Agent Risks from Advanced AI [90.74347101431474]
Multi-agent systems of advanced AI pose novel and under-explored risks. We identify three key failure modes based on agents' incentives, as well as seven key risk factors. We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z) - A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management [0.0]
The recent development of powerful AI systems has highlighted the need for robust risk management frameworks. This paper presents a comprehensive risk management framework for the development of frontier AI.
arXiv Detail & Related papers (2025-02-10T16:47:00Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.