A Technical Policy Blueprint for Trustworthy Decentralized AI
- URL: http://arxiv.org/abs/2512.11878v1
- Date: Sun, 07 Dec 2025 21:27:48 GMT
- Title: A Technical Policy Blueprint for Trustworthy Decentralized AI
- Authors: Hasan Kassem, Sergen Cansiz, Brandon Edwards, Patrick Foley, Inken Hagestedt, Taeho Jung, Prakash Moorthy, Michael O'Connor, Bruno Rodrigues, Holger Roth, Micah Sheller, Dimitris Stripelis, Marc Vesin, Renato Umeton, Mic Bowman, Alexandros Karargyris
- Abstract summary: We propose a Technical Policy Blueprint that encodes governance requirements as policy-as-code objects and separates asset policy verification from asset policy enforcement.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decentralized AI systems, such as federated learning, can play a critical role in further unlocking AI asset marketplaces (e.g., healthcare data marketplaces) thanks to increased asset privacy protection. Unlocking this big potential necessitates governance mechanisms that are transparent, scalable, and verifiable. However, current governance approaches rely on bespoke, infrastructure-specific policies that hinder asset interoperability and trust among systems. We propose a Technical Policy Blueprint that encodes governance requirements as policy-as-code objects and separates asset policy verification from asset policy enforcement. In this architecture, the Policy Engine verifies evidence (e.g., identities, signatures, payments, trusted-hardware attestations) and issues capability packages. Asset Guardians (e.g., data guardians, model guardians, computation guardians) enforce access or execution solely based on these capability packages. This core concept of decoupling policy processing from capabilities enables governance to evolve without reconfiguring AI infrastructure, thus creating an approach that is transparent, auditable, and resilient to change.
Related papers
- AI Deployment Authorisation: A Global Standard for Machine-Readable Governance of High-Risk Artificial Intelligence
This paper introduces the AI Deployment Authorisation Score (ADAS), a machine-readable regulatory framework that evaluates AI systems. ADAS produces a cryptographically verifiable deployment certificate that regulators, insurers, and infrastructure operators can consume as a license to operate.
arXiv Detail & Related papers (2026-01-11T18:14:20Z)
- Policy Cards: Machine-Readable Runtime Governance for Autonomous AI Agents
Policy Cards are a machine-readable, deployment-layer standard for expressing operational, regulatory, and ethical constraints for AI agents. Each Policy Card can be validated automatically, version-controlled, and linked to runtime enforcement or continuous-audit pipelines.
arXiv Detail & Related papers (2025-10-28T12:59:55Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- AI-Governed Agent Architecture for Web-Trustworthy Tokenization of Alternative Assets
Alternative asset tokenization is transforming how non-traditional financial instruments are represented and traded on the web. This paper proposes an AI-governed agent architecture that integrates intelligent agents with blockchain to achieve web-trustworthy tokenization of alternative assets.
arXiv Detail & Related papers (2025-06-30T11:28:51Z)
- Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z)
- A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control
This paper posits the imperative for a novel Agentic AI IAM framework. We propose a comprehensive framework built upon rich, verifiable Agent Identities (IDs). We also explore how Zero-Knowledge Proofs (ZKPs) enable privacy-preserving attribute disclosure and verifiable policy compliance.
arXiv Detail & Related papers (2025-05-25T20:21:55Z)
- Artificial Intelligence in Government: Why People Feel They Lose Control
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z)
- Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems
We propose an architecture for blockchain-based PAISs to preserve confidentiality and transparency. Smart contracts enact, enforce, and store public interactions, while attribute-based encryption techniques are adopted to specify access grants to confidential information. We assess the security of our solution through a systematic threat model analysis and evaluate its practical feasibility.
arXiv Detail & Related papers (2024-12-07T20:18:36Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act). It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Vulnerabilities that arise from poor governance in Distributed Ledger Technologies
Distributed Ledger Technologies (DLTs) promise decentralization, transparency, and security, yet the reality often falls short due to fundamental governance flaws. This paper surveys the state of DLT governance, identifies critical vulnerabilities, and highlights the absence of universally accepted best practices for good governance.
arXiv Detail & Related papers (2024-09-24T10:19:00Z)
- Computing Power and the Governance of Artificial Intelligence
Governments and companies have started to leverage compute as a means to govern AI. Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation. Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols
The Controller Area Network (CAN) bus lacks built-in security mechanisms, leaving in-vehicle communications inherently non-secure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.