Composable Attestation: A Generalized Framework for Continuous and Incremental Trust in AI-Driven Distributed Systems
- URL: http://arxiv.org/abs/2603.02451v1
- Date: Mon, 02 Mar 2026 22:45:26 GMT
- Title: Composable Attestation: A Generalized Framework for Continuous and Incremental Trust in AI-Driven Distributed Systems
- Authors: Sheng Sun, Sarah Evans
- Abstract summary: This paper presents composable attestation as a generalized cryptographic framework for continuous and incremental trust in distributed systems. We establish a rigorous mathematical foundation that defines the core properties of such attestation systems. The framework's utility extends to applications such as secure AI model integrity verification, federated learning, and runtime trust assurance.
- Score: 4.2822349607372265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents composable attestation as a generalized cryptographic framework for continuous and incremental trust in distributed systems, such as Artificial Intelligence (AI) computation and Open Source Software (OSS) supply chain verification. We establish a rigorous mathematical foundation that defines the core properties of such attestation systems: composability, order independence, transitivity, determinism, inclusion, and dynamic component verification. In contrast to traditional attestation methodologies that rely on monolithic verification, composable attestation facilitates modular, scalable, and cryptographically secured integrity verification adaptable to evolving system configurations. This work introduces generalized attestation proof generation and verification functions, implementable via a variety of cryptographic constructions, in which Merkle trees play a vital role in constructing the composable attestation proof. Alternative constructions, including accumulator-based schemes and multi-signature approaches, are also explored, each presenting distinct trade-offs in performance, security, and functionality. Formal analysis demonstrates the adherence of these implementations to the fundamental properties. The framework's utility extends to applications such as secure AI model integrity verification, federated learning, and runtime trust assurance. The concept of attestation inclusion is introduced, permitting incremental integration of new components without necessitating full system re-attestation. This generalized approach reinforces trust in AI computation and broader distributed computing environments through cryptographically verifiable proof mechanisms, building upon foundational concepts of bootstrapping trust.
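To make the Merkle-based construction concrete, here is a minimal sketch in Python. It is not the paper's implementation: SHA-256, the duplicate-last-node padding rule, and the component names are assumptions, and `inclusion_proof`/`verify_inclusion` merely stand in for the paper's generalized proof generation and verification functions.

```python
# Illustrative sketch of Merkle-based composable attestation (not the paper's code).
# Each component's measurement is a leaf; the root is the composite attestation.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold levels of the tree until one root remains (duplicate the last node on odd levels)."""
    assert leaves, "at least one component measurement required"
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool says whether the sibling sits on the right."""
    level = [H(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib > index))
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = H(leaf)
    for sibling, sibling_is_right in proof:
        node = H(node + sibling) if sibling_is_right else H(sibling + node)
    return node == root

# Usage: attest three components, then incrementally include a fourth.
components = [b"model-weights-v1", b"dataset-manifest", b"runtime-config"]
root = merkle_root(components)
proof = inclusion_proof(components, 1)
assert verify_inclusion(b"dataset-manifest", proof, root)

components.append(b"new-plugin")     # attestation inclusion: add one leaf
new_root = merkle_root(components)   # rebuilt here for brevity; an incremental
                                     # implementation updates only the O(log n)
                                     # hashes on the new leaf's path
```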
Related papers
- Modelling Trust and Trusted Systems: A Category Theoretic Approach [0.0]
We formalize elements, claims, results, and decisions as objects within a category. The framework provides a rigorous approach to understanding trust establishment. We present a number of worked examples, including boot-run-shutdown sequences and Evil Maid attacks.
arXiv Detail & Related papers (2026-02-11T21:08:51Z)
- Binding Agent ID: Unleashing the Power of AI Agents with accountability and credibility [46.323590135279126]
BAID (Binding Agent ID) is a comprehensive identity infrastructure establishing verifiable user-code binding. We implement and evaluate a complete prototype system, demonstrating the practical feasibility of blockchain-based identity management and a zkVM-based authentication protocol.
arXiv Detail & Related papers (2025-12-19T13:01:54Z)
- Context Lineage Assurance for Non-Human Identities in Critical Multi-Agent Systems [0.08316523707191924]
We introduce a cryptographically grounded mechanism for lineage verification, anchored in append-only Merkle tree structures. Unlike traditional A2A models that primarily secure point-to-point interactions, our approach enables both agents and external verifiers to cryptographically validate multi-hop provenance. In parallel, we augment the A2A agent card to incorporate explicit identity verification primitives, enabling both peer agents and human approvers to authenticate the legitimacy of NHI representations.
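As a rough illustration of the lineage idea, the following Python sketch simplifies the paper's append-only Merkle structure to a plain hash chain; the entry fields, agent names, and `append_hop`/`verify_lineage` helpers are illustrative assumptions, not the authors' design.

```python
# Hedged sketch: multi-hop lineage as an append-only hash chain (a simplification
# of the Merkle construction described in the abstract; field layout is assumed).
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def append_hop(head: bytes, agent_id: str, payload: bytes) -> bytes:
    """Each hop folds the acting agent's identity and payload digest into the head."""
    return H(head, agent_id.encode(), H(payload))

def verify_lineage(genesis: bytes, hops: list[tuple[str, bytes]], claimed: bytes) -> bool:
    """Any external verifier can replay the hops and compare against the claimed head."""
    head = genesis
    for agent_id, payload in hops:
        head = append_hop(head, agent_id, payload)
    return head == claimed

genesis = H(b"task-context-v1")
hops = [("agent-a", b"plan"), ("agent-b", b"tool-call"), ("agent-c", b"result")]
head = genesis
for aid, payload in hops:
    head = append_hop(head, aid, payload)
assert verify_lineage(genesis, hops, head)
```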
arXiv Detail & Related papers (2025-09-22T20:59:51Z)
- Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework. It is an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z)
- Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z)
- Probabilistic Bisimulation for Parameterized Anonymity and Uniformity Verification [5.806034991979994]
Bisimulation is crucial for verifying process equivalence in probabilistic systems. This paper presents a novel framework for analyzing bisimulation in infinite families of finite-state probabilistic systems. We show that essential properties like anonymity and uniformity can be encoded and verified within this framework.
arXiv Detail & Related papers (2025-05-15T04:56:53Z)
- Confidence Estimation via Sequential Likelihood Mixing [46.69347918899963]
We present a universal framework for constructing confidence sets based on sequential likelihood mixing. We establish fundamental connections between sequential mixing, Bayesian inference, and regret inequalities from online estimation. We illustrate the power of the framework by deriving tighter confidence sequences for classical settings.
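One common form of this construction (the notation below is assumed, not quoted from the paper) mixes the likelihood ratio over a prior on the parameter space and applies Ville's inequality to obtain an anytime-valid confidence sequence:

```latex
% Sketch: anytime-valid confidence sets via sequential likelihood mixing
% (standard form; notation assumed). M_t is a nonnegative martingale with
% M_0 = 1 when theta is the data-generating parameter.
\[
  M_t(\theta) = \int \prod_{s=1}^{t}
      \frac{p_{\lambda}(y_s)}{p_{\theta}(y_s)} \, d\mu(\lambda),
  \qquad
  C_t = \left\{ \theta : M_t(\theta) < \frac{1}{\alpha} \right\}.
\]
% Ville's inequality, Pr[ sup_t M_t(theta^*) >= 1/alpha ] <= alpha, yields
% the anytime-valid guarantee Pr[ theta^* in C_t for all t ] >= 1 - alpha.
```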
arXiv Detail & Related papers (2025-02-20T16:16:34Z)
- A Framework for the Security and Privacy of Biometric System Constructions under Defined Computational Assumptions [1.5446015139136167]
This paper introduces a formal framework for constructing secure and privacy-preserving biometric systems.
By leveraging the principles of universal composability, we enable the modular analysis and verification of individual system components.
arXiv Detail & Related papers (2024-11-26T11:10:11Z)
- Multi-modal biometric authentication: Leveraging shared layer architectures for enhanced security [0.0]
We introduce a novel multi-modal biometric authentication system that integrates facial, vocal, and signature data to enhance security measures.
Our model architecture incorporates dual shared layers alongside modality-specific enhancements for comprehensive feature extraction.
Our approach demonstrates significant improvements in authentication accuracy and robustness, paving the way for advanced secure identity verification solutions.
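As a loose sketch of what a dual-shared-layer design can look like (the layer widths, fusion by concatenation, and the classification head below are assumptions, not details from the paper), in PyTorch:

```python
# Hedged sketch of a shared-layer multi-modal authenticator; all dimensions,
# the fusion strategy, and the user-classification head are illustrative.
import torch
import torch.nn as nn

class SharedLayerAuth(nn.Module):
    def __init__(self, face_dim=512, voice_dim=192, sig_dim=64, hidden=256, n_users=100):
        super().__init__()
        # Modality-specific encoders project each input into a common space.
        self.face_enc = nn.Linear(face_dim, hidden)
        self.voice_enc = nn.Linear(voice_dim, hidden)
        self.sig_enc = nn.Linear(sig_dim, hidden)
        # Dual shared layers applied to every modality's embedding.
        self.shared = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 128), nn.ReLU(),
        )
        self.head = nn.Linear(3 * 128, n_users)  # fuse by concatenation

    def forward(self, face, voice, sig):
        z = [self.shared(enc(x)) for enc, x in
             [(self.face_enc, face), (self.voice_enc, voice), (self.sig_enc, sig)]]
        return self.head(torch.cat(z, dim=-1))

model = SharedLayerAuth()
logits = model(torch.randn(2, 512), torch.randn(2, 192), torch.randn(2, 64))
assert logits.shape == (2, 100)
```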
arXiv Detail & Related papers (2024-11-04T14:27:10Z)
- Trusted Multi-View Classification with Dynamic Evidential Fusion [73.35990456162745]
We propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC).
TMC provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
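The evidence-level fusion step can be sketched with the reduced Dempster combination rule commonly associated with TMC; the helper names and toy evidence values below are illustrative.

```python
# Hedged sketch of TMC-style evidence fusion via a reduced Dempster combination
# rule (following the formulas commonly given for TMC; names are illustrative).
import numpy as np

def to_opinion(evidence: np.ndarray):
    """Map per-class evidence e_k >= 0 to belief masses b and uncertainty u."""
    K = evidence.shape[-1]
    S = evidence.sum() + K          # Dirichlet strength with alpha_k = e_k + 1
    return evidence / S, K / S

def combine(b1, u1, b2, u2):
    """Combine two views' opinions; the conflict C discounts contradictory beliefs."""
    C = b1.sum() * b2.sum() - (b1 * b2).sum()   # sum over i != j of b1_i * b2_j
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1 - C)
    u = u1 * u2 / (1 - C)
    return b, u

b1, u1 = to_opinion(np.array([9.0, 1.0, 0.0]))   # view 1: confident in class 0
b2, u2 = to_opinion(np.array([4.0, 3.0, 3.0]))   # view 2: largely uncertain
b, u = combine(b1, u1, b2, u2)
print(b.round(3), round(float(u), 3))            # fused beliefs and uncertainty
```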
arXiv Detail & Related papers (2022-04-25T03:48:49Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)