Toward a Unified Security Framework for AI Agents: Trust, Risk, and Liability
- URL: http://arxiv.org/abs/2510.09620v1
- Date: Thu, 18 Sep 2025 01:55:03 GMT
- Title: Toward a Unified Security Framework for AI Agents: Trust, Risk, and Liability
- Authors: Jiayun Mo, Xin Kang, Tieyan Li, Zhongding Lei
- Abstract summary: The Trust, Risk and Liability (TRL) framework proposed in this paper ties together the interdependent relationships of trust, risk, and liability to provide a systematic method of building and enhancing trust. The implications of the TRL framework lie in its potential societal, economic, ethical, and other impacts. It is expected to bring remarkable value to addressing potential challenges and promoting trustworthy, risk-free, and responsible usage of AI in 6G networks.
- Score: 2.8407281360114527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The excitement brought by the development of AI agents has come alongside emerging problems. These concerns center on users' trust in AI, the risks involved, and the difficulty of attributing responsibility and liability. Current solutions attempt to target each problem separately without acknowledging their mutual influence. The Trust, Risk and Liability (TRL) framework proposed in this paper, by contrast, ties together the interdependent relationships of trust, risk, and liability to provide a systematic method of building and enhancing trust, analyzing and mitigating risks, and allocating and attributing liabilities. It can be applied to analyze any application scenario of AI agents and to suggest measures appropriate to the context. The implications of the TRL framework lie in its potential societal, economic, ethical, and other impacts. It is expected to bring remarkable value to addressing potential challenges and promoting trustworthy, risk-free, and responsible usage of AI in 6G networks.
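The abstract frames trust, risk, and liability as interdependent rather than separable. The paper presents TRL conceptually, not as code, so the toy sketch below is only an illustration of that interdependence; every field name and formula here is our own assumption, not the authors' method. It shows how mitigated risk and clearly allocated liability could jointly drive a trust score.

```python
from dataclasses import dataclass

@dataclass
class TRLAssessment:
    """Toy model of one AI-agent scenario viewed through the TRL lens.

    All fields and formulas are illustrative assumptions: the paper
    defines TRL as a conceptual framework, not an algorithm.
    """
    inherent_risk: float       # 0..1, risk before any mitigation
    mitigation: float          # 0..1, effectiveness of risk controls
    liability_coverage: float  # 0..1, share of residual harm with a clearly liable party

    def residual_risk(self) -> float:
        # Mitigation reduces inherent risk proportionally.
        return self.inherent_risk * (1.0 - self.mitigation)

    def trust(self) -> float:
        # Trust rises as residual risk falls AND as liability for the
        # remainder is clearly allocated -- the interdependence TRL stresses.
        return (1.0 - self.residual_risk()) * (0.5 + 0.5 * self.liability_coverage)

scenario = TRLAssessment(inherent_risk=0.8, mitigation=0.75, liability_coverage=0.9)
print(round(scenario.residual_risk(), 2))  # 0.2
print(round(scenario.trust(), 2))          # 0.76
```

Note how the same mitigation level yields different trust depending on liability coverage: that coupling, rather than the particular weights chosen here, is the point the abstract makes.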
Related papers
- Unbounded Harms, Bounded Law: Liability in the Age of Borderless AI [0.0]
The rapid proliferation of artificial intelligence (AI) has exposed significant deficiencies in risk governance. Core legal questions regarding liability allocation, responsibility attribution, and remedial effectiveness remain insufficiently theorized and institutionalized. This paper examines compensation and liability frameworks from high-risk transnational domains.
arXiv Detail & Related papers (2026-01-19T01:44:14Z)
- With Great Capabilities Come Great Responsibilities: Introducing the Agentic Risk & Capability Framework for Governing Agentic AI Systems [11.09031447875337]
The Agentic Risk & Capability (ARC) Framework is a technical governance framework designed to help organizations identify, assess, and mitigate risks arising from agentic AI systems. The framework's core contributions are: it develops a novel capability-centric perspective to analyze a wide range of agentic AI systems; it distills three primary sources of risk intrinsic to agentic AI systems (components, design, and capabilities); and it establishes a clear nexus between each risk source, specific materialized risks, and corresponding technical controls.
arXiv Detail & Related papers (2025-12-22T03:51:34Z) - AURA: An Agent Autonomy Risk Assessment Framework [0.0]
AURA (Agent aUtonomy Risk Assessment) is a unified framework designed to detect, quantify, and mitigate risks arising from agentic AI.<n>AURA provides an interactive process to score, evaluate and mitigate the risks of running one or multiple AI Agents, synchronously or asynchronously.<n>AURA supports a responsible and transparent adoption of agentic AI and provides robust risk detection and mitigation while balancing computational resources.
arXiv Detail & Related papers (2025-10-17T15:30:29Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics.<n>We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight.<n>Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- ff4ERA: A new Fuzzy Framework for Ethical Risk Assessment in AI [0.24578723416255746]
We introduce ff4ERA, a fuzzy framework that integrates Fuzzy Logic, the Fuzzy Analytic Hierarchy Process (FAHP), and Certainty Factors (CF). The framework offers a robust mathematical approach for collaborative Ethical Risk Assessment modeling and systematic, step-by-step analysis. A case study confirms that ff4ERA yields context-sensitive, meaningful risk scores reflecting both expert input and sensor-based evidence.
arXiv Detail & Related papers (2025-07-28T14:41:36Z)
- A Framework for the Assurance of AI-Enabled Systems [0.0]
This paper proposes a claims-based framework for risk management and assurance of AI systems. The paper's contributions are a framework process for AI assurance, a set of relevant definitions, and a discussion of important considerations in AI assurance.
arXiv Detail & Related papers (2025-04-03T13:44:01Z)
- Multi-Agent Risks from Advanced AI [90.74347101431474]
Multi-agent systems of advanced AI pose novel and under-explored risks. We identify three key failure modes based on agents' incentives, as well as seven key risk factors. We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z)
- Fully Autonomous AI Agents Should Not be Developed [50.61667544399082]
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Agentic AI: Autonomy, Accountability, and the Algorithmic Society [0.2209921757303168]
Agentic Artificial Intelligence (AI) can autonomously pursue long-term goals, make decisions, and execute complex, multi-turn tasks. This transition from advisory roles to proactive execution challenges established legal, economic, and creative frameworks. We explore challenges in three interrelated domains: creativity and intellectual property, legal and ethical considerations, and competitive effects.
arXiv Detail & Related papers (2025-02-01T03:14:59Z)
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) [0.0]
The authors argue that ethical concerns about AI deployment vary significantly based on implementation context and specific use cases. They propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing.
arXiv Detail & Related papers (2025-01-20T19:38:21Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models [1.7466076090043157]
Large Language Models (LLMs) could transform many fields, but their fast development creates significant challenges for oversight, ethical creation, and building user trust.
This comprehensive review looks at key trust issues in LLMs, such as unintended harms, lack of transparency, vulnerability to attacks, alignment with human values, and environmental impact.
To tackle these issues, we suggest combining ethical oversight, industry accountability, regulation, and public involvement.
arXiv Detail & Related papers (2024-06-01T14:47:58Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- Risks of AI Scientists: Prioritizing Safeguarding Over Autonomy [65.77763092833348]
This perspective examines vulnerabilities in AI scientists, shedding light on potential risks associated with their misuse. We take into account user intent, the specific scientific domain, and their potential impact on the external environment. We propose a triadic framework involving human regulation, agent alignment, and an understanding of environmental feedback.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
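Among the related entries, ff4ERA is the most algorithmically concrete: it combines FAHP-derived weights, fuzzy risk memberships, and certainty factors. The sketch below only gestures at the flavor of such a pipeline; the factor names, weights, and MYCIN-style CF combination rule are illustrative assumptions, not the paper's actual formulation.

```python
def combine_cf(cf_a: float, cf_b: float) -> float:
    """MYCIN-style combination of two positive certainty factors (0..1).

    Used here as a stand-in for however ff4ERA merges expert and
    sensor-derived confidence; the real calculus is in the paper.
    """
    return cf_a + cf_b * (1.0 - cf_a)

def ethical_risk_score(memberships: dict, weights: dict) -> float:
    """Weighted aggregation of fuzzy risk memberships (each 0..1)."""
    # FAHP-style priority weights are normalized to sum to 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * memberships[k] for k in weights)

# Hypothetical ethical-risk factors for an AI-agent deployment scenario.
memberships = {"privacy": 0.7, "autonomy": 0.4, "safety": 0.9}
weights     = {"privacy": 0.3, "autonomy": 0.2, "safety": 0.5}

score = ethical_risk_score(memberships, weights)
# Confidence from expert input combined with sensor-based evidence.
certainty = combine_cf(0.6, 0.5)
print(round(score, 2))      # 0.74
print(round(certainty, 2))  # 0.8
```

Reporting the score together with a certainty value, rather than the score alone, is what lets a framework like ff4ERA distinguish a well-evidenced high risk from a speculative one.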
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.