Policy Cards: Machine-Readable Runtime Governance for Autonomous AI Agents
- URL: http://arxiv.org/abs/2510.24383v1
- Date: Tue, 28 Oct 2025 12:59:55 GMT
- Title: Policy Cards: Machine-Readable Runtime Governance for Autonomous AI Agents
- Authors: Juraj Mavračić
- Abstract summary: Policy Cards are a machine-readable, deployment-layer standard for expressing operational, regulatory, and ethical constraints for AI agents. Each Policy Card can be validated automatically, version-controlled, and linked to runtime enforcement or continuous-audit pipelines.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Policy Cards are introduced as a machine-readable, deployment-layer standard for expressing operational, regulatory, and ethical constraints for AI agents. The Policy Card sits with the agent and enables it to follow required constraints at runtime. It tells the agent what it must and must not do. As such, it becomes an integral part of the deployed agent. Policy Cards extend existing transparency artifacts such as Model, Data, and System Cards by defining a normative layer that encodes allow/deny rules, obligations, evidentiary requirements, and crosswalk mappings to assurance frameworks including NIST AI RMF, ISO/IEC 42001, and the EU AI Act. Each Policy Card can be validated automatically, version-controlled, and linked to runtime enforcement or continuous-audit pipelines. The framework enables verifiable compliance for autonomous agents, forming a foundation for distributed assurance in multi-agent ecosystems. Policy Cards provide a practical mechanism for integrating high-level governance with hands-on engineering practice and enabling accountable autonomy at scale.
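The abstract describes Policy Cards as encoding allow/deny rules and obligations that an agent consults at runtime. A minimal sketch of that idea is below; the field names, default-deny behavior, and check function are illustrative assumptions, not the schema defined in the paper.

```python
# Illustrative sketch of a Policy Card as structured data, plus a runtime
# check that consults deny rules before allow rules. Field names are
# hypothetical; the paper defines the actual Policy Card schema.

POLICY_CARD = {
    "version": "1.0.0",
    "deny": ["delete_records", "export_pii"],          # actions always refused
    "allow": ["read_records", "summarize"],            # actions permitted
    "obligations": {"read_records": ["log_access"]},   # duties attached to actions
}

def check_action(card: dict, action: str) -> tuple[bool, list[str]]:
    """Return (permitted, obligations) for a requested action."""
    if action in card["deny"]:
        return False, []
    if action in card["allow"]:
        return True, card["obligations"].get(action, [])
    return False, []  # default-deny: unlisted actions are refused

print(check_action(POLICY_CARD, "read_records"))    # (True, ['log_access'])
print(check_action(POLICY_CARD, "delete_records"))  # (False, [])
```

Deny rules take precedence over allow rules here, and unlisted actions are refused by default; both are common policy-engine conventions, assumed for the sketch.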
Related papers
- Policy Compiler for Secure Agentic Systems [20.346157626726725]
We present PCAS, a Policy Compiler for Agentic Systems that provides deterministic policy enforcement. We evaluate PCAS on three case studies: information flow policies for prompt injection defense, approval in a multi-agent pharmacovigilance system, and organizational policies for customer service.
arXiv Detail & Related papers (2026-02-18T18:57:12Z)
- Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z)
- AI Deployment Authorisation: A Global Standard for Machine-Readable Governance of High-Risk Artificial Intelligence [0.0]
This paper introduces the AI Deployment Authorisation Score (ADAS), a machine-readable regulatory framework that evaluates AI systems. ADAS produces a cryptographically verifiable deployment certificate that regulators, insurers, and infrastructure operators can consume as a license to operate.
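The ADAS abstract mentions a cryptographically verifiable deployment certificate. A toy sketch of issuing and verifying such a certificate is below, using an HMAC over a canonical JSON payload; the payload fields, key handling, and signature scheme are stand-ins, not the mechanism specified by ADAS (which would more plausibly use public-key signatures).

```python
# Hedged sketch: a machine-readable deployment certificate with a verifiable
# signature. HMAC with a shared secret is used only for illustration; the
# certificate format and signing scheme here are assumptions, not ADAS's.
import hashlib, hmac, json

SECRET = b"regulator-signing-key"  # stand-in for a real key pair

def issue_certificate(system_id: str, score: float) -> dict:
    payload = {"system_id": system_id, "adas_score": score}
    body = json.dumps(payload, sort_keys=True).encode()  # canonical serialization
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_certificate(cert: dict) -> bool:
    body = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("agent-7", 0.92)
print(verify_certificate(cert))  # True
```

Serializing with `sort_keys=True` gives a stable byte representation, so any consumer can recompute and check the signature; tampering with the payload invalidates it.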
arXiv Detail & Related papers (2026-01-11T18:14:20Z)
- A Technical Policy Blueprint for Trustworthy Decentralized AI [29.298499284846844]
We propose a Technical Policy Blueprint that encodes governance requirements as policy-as-code objects and separates asset policy verification from asset policy enforcement.
arXiv Detail & Related papers (2025-12-07T21:27:48Z)
- Are Agents Just Automata? On the Formal Equivalence Between Agentic AI and the Chomsky Hierarchy [4.245979127318219]
This paper establishes a formal equivalence between the architectural classes of modern agentic AI systems and the abstract machines of the Chomsky hierarchy. We demonstrate that simple reflex agents are equivalent to Finite Automata, hierarchical task-decomposition agents are equivalent to Pushdown Automata, and agents employing readable/writable memory for reflection are equivalent to Turing Machines.
arXiv Detail & Related papers (2025-10-27T16:22:02Z)
- Analyzing and Internalizing Complex Policy Documents for LLM Agents [53.14898416858099]
Large Language Model (LLM)-based agentic systems rely on in-context policy documents encoding diverse business rules. This motivates developing internalization methods that embed policy documents into model priors while preserving performance. We introduce CC-Gen, an agentic benchmark generator with Controllable Complexity across four levels.
arXiv Detail & Related papers (2025-10-13T16:30:07Z)
- The AI Agent Code of Conduct: Automated Guardrail Policy-as-Prompt Synthesis [0.19336815376402716]
We introduce a novel framework that automates the translation of unstructured design documents into verifiable, real-time guardrails. "Policy as Prompt" uses Large Language Models (LLMs) to interpret and enforce natural language policies. We validate our approach across diverse applications, demonstrating a scalable and auditable pipeline.
arXiv Detail & Related papers (2025-09-28T17:36:52Z)
- Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework, an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z)
- ARPaCCino: An Agentic-RAG for Policy as Code Compliance [0.18472148461613155]
ARPaCCino is an agentic system that combines Large Language Models, Retrieval-Augmented Generation, and tool-based validation. It generates formal Rego rules, assesses IaC compliance, and iteratively refines the IaC configurations to ensure conformance. Our results highlight the potential of agentic RAG architectures to enhance the automation, reliability, and accessibility of PaC.
arXiv Detail & Related papers (2025-07-11T12:36:33Z)
- LLM Agents Should Employ Security Principles [60.03651084139836]
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale. We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's life-cycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z)
- A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control [7.228060525494563]
This paper posits the imperative for a novel Agentic AI IAM framework. We propose a comprehensive framework built upon rich, verifiable Agent Identities (IDs). We also explore how Zero-Knowledge Proofs (ZKPs) enable privacy-preserving attribute disclosure and verifiable policy compliance.
arXiv Detail & Related papers (2025-05-25T20:21:55Z)
- Architecture for Simulating Behavior Mode Changes in Norm-Aware Autonomous Agents [0.0]
This paper presents an architecture for simulating the actions of a norm-aware intelligent agent. Updating an agent's behavior mode from a norm-abiding to a riskier one may be relevant when the agent is involved in time-sensitive rescue operations.
arXiv Detail & Related papers (2025-02-13T11:49:02Z)
- Conformal Policy Learning for Sensorimotor Control Under Distribution Shifts [61.929388479847525]
This paper focuses on the problem of detecting and reacting to changes in the distribution of a sensorimotor controller's observables.
The key idea is to design switching policies that take conformal quantiles as input, using them to switch between base policies with different characteristics.
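The switching idea above can be sketched concretely: estimate a (1 − alpha) conformal quantile of nonconformity scores on calibration data, then fall back to a conservative base policy whenever a new observation's score exceeds it. The score values, policy names, and threshold rule below are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch of conformal switching between base policies. The
# nonconformity scores and the two policies are illustrative stand-ins.
import math

def conformal_quantile(scores: list[float], alpha: float) -> float:
    """Split-conformal quantile with the standard finite-sample correction."""
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the conformal quantile
    return sorted(scores)[min(k, n) - 1]

def switching_policy(score: float, threshold: float) -> str:
    # In-distribution observation: keep the nominal (performant) policy;
    # out-of-distribution observation: switch to the safe (conservative) one.
    return "nominal" if score <= threshold else "safe"

calibration = [0.1, 0.2, 0.15, 0.3, 0.25, 0.18, 0.22, 0.12, 0.28, 0.2]
q = conformal_quantile(calibration, alpha=0.1)
print(switching_policy(0.05, q))  # nominal
print(switching_policy(0.9, q))   # safe
```

The finite-sample rank ceil((n + 1)(1 − alpha)) is the usual split-conformal choice, giving the threshold a coverage guarantee on exchangeable data.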
arXiv Detail & Related papers (2023-11-02T17:59:30Z)
- A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.