Binding Agent ID: Unleashing the Power of AI Agents with accountability and credibility
- URL: http://arxiv.org/abs/2512.17538v1
- Date: Fri, 19 Dec 2025 13:01:54 GMT
- Title: Binding Agent ID: Unleashing the Power of AI Agents with accountability and credibility
- Authors: Zibin Lin, Shengli Zhang, Guofu Liao, Dacheng Tao, Taotao Wang
- Abstract summary: BAID (Binding Agent ID) is a comprehensive identity infrastructure establishing verifiable user-code binding. We implement and evaluate a complete prototype system, demonstrating the practical feasibility of blockchain-based identity management and the zkVM-based authentication protocol.
- Score: 46.323590135279126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous AI agents lack traceable accountability mechanisms, creating a fundamental dilemma: systems must either operate as "downgraded tools" or risk real-world abuse. This vulnerability stems from the limitations of traditional key-based authentication, which guarantees neither the operator's physical identity nor the agent's code integrity. To bridge this gap, we propose BAID (Binding Agent ID), a comprehensive identity infrastructure establishing verifiable user-code binding. BAID integrates three orthogonal mechanisms: local binding via biometric authentication, decentralized on-chain identity management, and a novel zkVM-based Code-Level Authentication protocol. By leveraging recursive proofs to treat the program binary as the identity, this protocol provides cryptographic guarantees for operator identity, agent configuration integrity, and complete execution provenance, thereby effectively preventing unauthorized operation and code substitution. We implement and evaluate a complete prototype system, demonstrating the practical feasibility of blockchain-based identity management and the zkVM-based authentication protocol.
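As a rough illustration of the user-code binding idea, the sketch below (hypothetical Python, not the paper's implementation; all names are illustrative) derives an agent identity from a digest of its program binary and commits to it together with the operator's identity. BAID's actual protocol relies on biometric authentication, on-chain records, and zkVM recursive proofs rather than bare hashes, but the sketch shows why treating the binary as the identity defeats code substitution:

```python
import hashlib

def code_identity(binary: bytes) -> str:
    # The agent's identity is a cryptographic digest of its program
    # binary, so any code substitution changes the identity.
    return hashlib.sha256(binary).hexdigest()

def bind(user_id: str, binary: bytes) -> str:
    # Hypothetical user-code binding: commit to both the operator's
    # identity and the exact code the agent runs.
    payload = user_id.encode() + code_identity(binary).encode()
    return hashlib.sha256(payload).hexdigest()

def verify(user_id: str, binary: bytes, binding: str) -> bool:
    # Recompute the commitment and compare against the stored binding.
    return bind(user_id, binary) == binding

original = b"agent program bytes"
b = bind("alice", original)
assert verify("alice", original, b)          # authorized operator, intact code
assert not verify("alice", b"tampered", b)   # code substitution detected
assert not verify("mallory", original, b)    # unauthorized operator rejected
```

A tampered binary or a different operator both invalidate the binding; in BAID the analogous check is performed with zero-knowledge proofs so that verification does not require revealing the biometric or the binary itself.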
Related papers
- Interoperable Architecture for Digital Identity Delegation for AI Agents with Blockchain Integration [0.0]
We introduce a unified framework that enables bounded, auditable, and least-privilege delegation across heterogeneous identity ecosystems.
Among its four key elements are Delegation Grants (DGs): first-class authorization artefacts that encode revocable transfers of authority with enforced scope reduction.
It also includes a layered reference architecture that separates trust anchoring, credential and proof validation, policy evaluation, and protocol mediation via a Trust Gateway.
arXiv Detail & Related papers (2026-01-21T13:29:23Z)
- Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend their capabilities by accessing tools such as data sources, APIs, search engines, code sandboxes, and even other agents.
LLMs may invoke unintended tool interactions, introducing risks such as leaking sensitive data or overwriting critical records.
Current approaches to mitigating these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z)
- Achieving Flexible and Secure Authentication with Strong Privacy in Decentralized Networks [13.209703999398805]
IRAC is a flexible credential model that unifies credentials from heterogeneous issuers.
We design a secure decentralized revocation mechanism in which holders prove non-revocation by demonstrating that their credential falls within a gap in the issuer's sorted revocation list.
arXiv Detail & Related papers (2025-12-23T10:49:05Z)
- Secure Autonomous Agent Payments: Verifying Authenticity and Intent in a Trustless Environment [0.0]
Artificial intelligence (AI) agents are increasingly capable of initiating financial transactions on behalf of users or other agents.
Traditional payment systems assume human authorization, but autonomous, agent-led payments remove that safeguard.
This paper presents a blockchain-based framework that cryptographically authenticates and verifies the intent of every AI-initiated transaction.
arXiv Detail & Related papers (2025-11-08T19:53:51Z)
- VeriGuard: Enhancing LLM Agent Safety via Verified Code Generation [40.594947933580464]
The deployment of autonomous AI agents in sensitive domains, such as healthcare, introduces critical risks to safety, security, and privacy.
We introduce VeriGuard, a novel framework that provides formal safety guarantees for LLM-based agents.
arXiv Detail & Related papers (2025-10-03T04:11:43Z)
- AI Agents with Decentralized Identifiers and Verifiable Credentials [32.505127447635864]
This article presents a prototypical multi-agent system in which each agent is endowed with a self-sovereign digital identity.
It combines an agent's unique, ledger-anchored Decentralized Identifier (DID) with a set of third-party-issued Verifiable Credentials (VCs).
At the start of a dialog, this enables agents to prove ownership of their self-controlled DIDs for authentication purposes and to establish various cross-domain trust relationships.
arXiv Detail & Related papers (2025-10-01T08:10:37Z)
- LLM Agents Should Employ Security Principles [60.03651084139836]
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale.
We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's life-cycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z)
- A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control [7.228060525494563]
This paper posits the imperative for a novel Agentic AI IAM framework.
We propose a comprehensive framework built upon rich, verifiable Agent Identities (IDs).
We also explore how Zero-Knowledge Proofs (ZKPs) enable privacy-preserving attribute disclosure and verifiable policy compliance.
arXiv Detail & Related papers (2025-05-25T20:21:55Z)
- LOKA Protocol: A Decentralized Framework for Trustworthy and Ethical AI Agent Ecosystems [0.0]
We present the novel LOKA Protocol (Layered Orchestration for Knowledgeful Agents), a unified, systems-level architecture for building ethically governed, interoperable AI agent ecosystems.
LOKA introduces a proposed Universal Agent Identity Layer (UAIL) for decentralized, verifiable identity; intent-centric communication protocols for semantic coordination across diverse agents; and a Decentralized Ethical Consensus Protocol (DECP) that could enable agents to make context-aware decisions grounded in shared ethical baselines.
arXiv Detail & Related papers (2025-04-15T06:51:35Z)
- CryptoFormalEval: Integrating LLMs and Formal Verification for Automated Cryptographic Protocol Vulnerability Detection [41.94295877935867]
We introduce a benchmark to assess the ability of Large Language Models to autonomously identify vulnerabilities in new cryptographic protocols.
We created a dataset of novel, flawed communication protocols and designed a method to automatically verify the vulnerabilities found by the AI agents.
arXiv Detail & Related papers (2024-11-20T14:16:55Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Combining Decentralized IDentifiers with Proof of Membership to Enable Trust in IoT Networks [44.99833362998488]
The paper proposes and discusses an alternative (mutual) authentication process for IoT nodes under the same administration domain.
The main idea is to combine the Decentralized IDentifier (DID)-based verification of private key ownership with the verification of a proof that the DID belongs to an evolving trusted set.
arXiv Detail & Related papers (2023-10-12T09:33:50Z)
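The combination in the last entry — proving private-key ownership of a DID plus proving that the DID belongs to an evolving trusted set — is commonly realized with a Merkle-tree accumulator: the set's root is published, and each member proves inclusion with a logarithmic-size path. The sketch below (hypothetical Python, stdlib only; names and DIDs are illustrative, not taken from the paper) shows such a membership proof:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash the leaves, then repeatedly hash adjacent pairs up to the root,
    # duplicating the last node on odd-sized levels.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Collect (sibling_hash, sibling_is_right) pairs from leaf to root.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_membership(leaf, proof, root):
    # Re-hash along the proof path and compare with the published root.
    node = h(leaf)
    for sib, sib_is_right in proof:
        node = h(node + sib) if sib_is_right else h(sib + node)
    return node == root

dids = [b"did:ex:alice", b"did:ex:bob", b"did:ex:carol"]
root = merkle_root(dids)                 # published by the administration domain
p = merkle_proof(dids, 1)                # bob's inclusion path
assert verify_membership(b"did:ex:bob", p, root)
assert not verify_membership(b"did:ex:mallory", p, root)
```

When the trusted set evolves (nodes join or leave), the domain republishes a new root and members refresh their paths; a verifier only ever needs the current root, which keeps the on-node state constant.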
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.