Authenticated Delegation and Authorized AI Agents
- URL: http://arxiv.org/abs/2501.09674v1
- Date: Thu, 16 Jan 2025 17:11:21 GMT
- Title: Authenticated Delegation and Authorized AI Agents
- Authors: Tobin South, Samuele Marro, Thomas Hardjono, Robert Mahari, Cedric Deslandes Whitney, Dazza Greenwood, Alan Chan, Alex Pentland
- Abstract summary: We introduce a novel framework for authenticated, authorized, and auditable delegation of authority to AI agents. We propose a framework for translating flexible, natural language permissions into auditable access control configurations.
- Score: 4.679384754914167
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The rapid deployment of autonomous AI agents creates urgent challenges around authorization, accountability, and access control in digital spaces. New standards are needed to know whom AI agents act on behalf of and guide their use appropriately, protecting online spaces while unlocking the value of task delegation to autonomous agents. We introduce a novel framework for authenticated, authorized, and auditable delegation of authority to AI agents, where human users can securely delegate and restrict the permissions and scope of agents while maintaining clear chains of accountability. This framework builds on existing identification and access management protocols, extending OAuth 2.0 and OpenID Connect with agent-specific credentials and metadata, maintaining compatibility with established authentication and web infrastructure. Further, we propose a framework for translating flexible, natural language permissions into auditable access control configurations, enabling robust scoping of AI agent capabilities across diverse interaction modalities. Taken together, this practical approach facilitates immediate deployment of AI agents while addressing key security and accountability concerns, working toward ensuring agentic AI systems perform only appropriate actions and providing a tool for digital service providers to enable AI agent interactions without risking harm from scalable interaction.
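The abstract describes extending OAuth 2.0 and OpenID Connect with agent-specific credentials that record who delegated authority and what scope was granted. A minimal sketch of what such a delegation token might look like is shown below, assuming a JWT-style structure with HMAC signing; the claim names (`delegator`, `sub` for the agent, `scope`) and the issuer URL are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch of a delegation token with agent-specific claims,
# in the spirit of the paper's OAuth 2.0 / OpenID Connect extension.
# Claim names and the issuer URL are illustrative assumptions.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-demo-secret"  # stand-in for the issuer's signing key


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_delegation_token(delegator: str, agent_id: str,
                           scope: list[str], ttl: int = 3600) -> str:
    """Mint a signed token recording who delegated what to which agent."""
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {
        "iss": "https://idp.example",  # hypothetical identity provider
        "sub": agent_id,               # the AI agent receiving authority
        "delegator": delegator,        # the human user the agent acts for
        "scope": scope,                # restricted permissions for the agent
        "exp": int(time.time()) + ttl,
    }
    signing_input = (f"{b64url(json.dumps(header).encode())}."
                     f"{b64url(json.dumps(claims).encode())}")
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


def verify(token: str) -> dict:
    """Check the signature and expiry, then return the delegation claims."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload = signing_input.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims


token = issue_delegation_token("alice@example.com", "agent-42", ["calendar:read"])
claims = verify(token)
```

A relying service could inspect `claims["delegator"]` to maintain the chain of accountability and reject any agent action outside `claims["scope"]`; the paper's framework additionally covers how such scopes are derived from natural language permissions.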
Related papers
- SAGA: A Security Architecture for Governing AI Agentic Systems [13.106925341037046]
Large Language Model (LLM)-based agents increasingly interact, collaborate, and delegate tasks to one another autonomously with minimal human interaction.
Industry guidelines for agentic system governance emphasize the need for users to maintain comprehensive control over their agents.
We propose SAGA, a Security Architecture for Governing Agentic systems, that offers user oversight over their agents' lifecycle.
arXiv Detail & Related papers (2025-04-27T23:10:00Z)
- Trusted Identities for AI Agents: Leveraging Telco-Hosted eSIM Infrastructure [0.0]
We propose a conceptual architecture that leverages telecom-grade eSIM infrastructure.
Rather than embedding SIM credentials in hardware devices, we envision a model where telcos host secure, certified hardware modules.
This paper is intended as a conceptual framework to open discussion around standardization, security architecture, and the role of telecom infrastructure in the evolving agent economy.
arXiv Detail & Related papers (2025-04-17T15:36:26Z)
- Towards Agentic AI Networking in 6G: A Generative Foundation Model-as-Agent Approach [35.05793485239977]
We propose AgentNet, a novel framework for supporting interaction, collaborative learning, and knowledge transfer among AI agents.
We consider two application scenarios, digital-twin-based industrial automation and metaverse-based infotainment system, to describe how to apply AgentNet.
arXiv Detail & Related papers (2025-03-20T00:48:44Z)
- AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents [75.85554113398626]
We develop a benchmark called AgentDAM to evaluate how well existing and future AI agents can limit processing of potentially private information.
Our benchmark simulates realistic web interaction scenarios and is adaptable to all existing web navigation agents.
arXiv Detail & Related papers (2025-03-12T19:30:31Z)
- AgentOrca: A Dual-System Framework to Evaluate Language Agents on Operational Routine and Constraint Adherence [54.317522790545304]
We present AgentOrca, a dual-system framework for evaluating language agents' compliance with operational constraints and routines.
Our framework encodes action constraints and routines through both natural language prompts for agents and corresponding executable code serving as ground truth for automated verification.
Our findings reveal notable performance gaps among state-of-the-art models, with large reasoning models like o1 demonstrating superior compliance while others show significantly lower performance.
arXiv Detail & Related papers (2025-03-11T17:53:02Z)
- Infrastructure for AI Agents [3.373674048991415]
AI systems can plan and execute interactions in open-ended environments, such as making phone calls or buying online goods.
Current tools are largely insufficient because they are not designed to shape how agents interact with existing institutions.
We propose the concept of agent infrastructure: technical systems and shared protocols external to agents.
arXiv Detail & Related papers (2025-01-17T10:58:12Z)
- Governing AI Agents [0.2913760942403036]
Companies that pioneered the development of generative AI tools are now building AI agents. This Article uses agency law and theory to identify and characterize problems arising from AI agents. It argues that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
arXiv Detail & Related papers (2025-01-14T07:55:18Z)
- Agent-as-a-Judge: Evaluate Agents with Agents [61.33974108405561]
We introduce the Agent-as-a-Judge framework, wherein agentic systems are used to evaluate agentic systems.
This is an organic extension of the LLM-as-a-Judge framework, incorporating agentic features that enable intermediate feedback for the entire task-solving process.
We present DevAI, a new benchmark of 55 realistic automated AI development tasks.
arXiv Detail & Related papers (2024-10-14T17:57:02Z)
- Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence [79.5316642687565]
Existing multi-agent frameworks often struggle with integrating diverse capable third-party agents.
We propose the Internet of Agents (IoA), a novel framework that addresses these limitations.
IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control.
arXiv Detail & Related papers (2024-07-09T17:33:24Z)
- Security of AI Agents [5.468745160706382]
We identify and describe potential vulnerabilities in AI agents in detail from a system security perspective. We introduce defense mechanisms corresponding to each vulnerability with design and experiments to evaluate their viability. This paper contextualizes the security issues in the current development of AI agents and delineates methods to make AI agents safer and more reliable.
arXiv Detail & Related papers (2024-06-12T23:16:45Z)
- CACA Agent: Capability Collaboration based AI Agent [18.84686313298908]
We propose CACA Agent (Capability Collaboration based AI Agent) using an open architecture inspired by service computing.
CACA Agent integrates a set of collaborative capabilities to implement AI agents, reducing the dependence on a single LLM.
We present a demo to illustrate the operation and the application scenario extension of CACA Agent.
arXiv Detail & Related papers (2024-03-22T11:42:47Z)
- AgentScope: A Flexible yet Robust Multi-Agent Platform [66.64116117163755]
AgentScope is a developer-centric multi-agent platform with message exchange as its core communication mechanism.
The abundant syntactic tools, built-in agents and service functions, user-friendly interfaces for application demonstration and utility monitor, zero-code programming workstation, and automatic prompt tuning mechanism significantly lower the barriers to both development and deployment.
arXiv Detail & Related papers (2024-02-21T04:11:28Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.