Towards Trusted Service Monitoring: Verifiable Service Level Agreements
- URL: http://arxiv.org/abs/2510.13370v2
- Date: Wed, 29 Oct 2025 15:51:12 GMT
- Title: Towards Trusted Service Monitoring: Verifiable Service Level Agreements
- Authors: Fernando Castillo, Eduardo Brito, Sebastian Werner, Pille Pullonen-Raudvere, Jonathan Heiss,
- Abstract summary: Service Level Agreement (SLA) monitoring in service-oriented environments suffers from inherent trust conflicts when providers self-report metrics. We introduce a framework for generating verifiable SLA violation claims through trusted hardware monitors and zero-knowledge proofs.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Service Level Agreement (SLA) monitoring in service-oriented environments suffers from inherent trust conflicts when providers self-report metrics, creating incentives to underreport violations. We introduce a framework for generating verifiable SLA violation claims through trusted hardware monitors and zero-knowledge proofs, establishing cryptographic foundations for genuine trustworthiness in service ecosystems. Our approach starts with machine-readable SLA clauses converted into verifiable predicates and monitored within Trusted Execution Environments. These monitors collect timestamped telemetry, organize measurements into Merkle trees, and produce signed attestations. Zero-knowledge proofs aggregate Service-Level Indicators to evaluate compliance, generating cryptographic proofs verifiable by stakeholders, arbitrators, or insurers in disputes, without accessing underlying data. This ensures three security properties: integrity, authenticity, and validity. Our prototype demonstrates linear scaling to over 1 million measurement events per hour, with near-constant-time proof generation and verification for single violation claims, enabling trustless SLA enforcement through cryptographic guarantees for automated compliance verification in service monitoring.
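The abstract's measurement pipeline (timestamped telemetry organized into a Merkle tree whose root can be attested, with per-measurement inclusion proofs shown to a verifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the measurement fields, helper names, and use of SHA-256 over canonical JSON are assumptions for the sake of the example.

```python
import hashlib
import json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(measurement: dict) -> bytes:
    # Canonical serialization so monitor and verifier hash identically.
    return h(json.dumps(measurement, sort_keys=True).encode())

def merkle_root(leaves: list[bytes]) -> bytes:
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    # Sibling hashes from leaf to root; the flag marks a left-hand sibling.
    proof, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = leaf
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Illustrative telemetry: a TEE monitor would sign merkle_root(leaves) as its
# attestation; a verifier then checks one measurement against that root alone.
measurements = [{"ts": i, "latency_ms": 40 + i} for i in range(5)]
leaves = [leaf_hash(m) for m in measurements]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
print(verify_proof(leaves[2], proof, root))  # True
```

The verifier never sees the other measurements, only their hashes along the proof path, which is what lets the attested root support dispute resolution without disclosing the underlying telemetry.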
Related papers
- IMMACULATE: A Practical LLM Auditing Framework via Verifiable Computation
We present IMMACULATE, a practical auditing framework that detects economically motivated deviations. IMMACULATE selectively audits a small fraction of requests using verifiable computation, achieving strong detection guarantees while amortizing cryptographic overhead.
arXiv Detail & Related papers (2026-02-26T07:21:02Z)
- Detecting Object Tracking Failure via Sequential Hypothesis Testing
Real-time online object tracking in videos constitutes a core task in computer vision. We propose interpreting object tracking as a sequential hypothesis test, wherein evidence for or against tracking failures is gradually accumulated over time. We propose both supervised and unsupervised variants by leveraging either ground-truth or solely internal tracking information.
arXiv Detail & Related papers (2026-02-13T14:57:15Z)
- Towards Verifiably Safe Tool Use for LLM Agents
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z)
- Verification of Lightning Network Channel Balances with Trusted Execution Environments (TEE)
This paper introduces a methodology for the verification of LN channel balances. The core contribution is a framework that combines Trusted Execution Environments (TEEs) with Zero-Knowledge Transport Layer Security (zkTLS) to provide strong, hardware-backed guarantees.
arXiv Detail & Related papers (2025-12-12T23:55:12Z)
- Zero-Knowledge Audit for Internet of Agents: Privacy-Preserving Communication Verification with Model Context Protocol
We introduce a framework for auditing agent communications that keeps messages private while still checking they follow expected rules. It pairs zero-knowledge proofs with the existing Model Context Protocol (MCP) so messages can be verified without revealing their contents. We show that zk-MCP provides data authenticity and communication privacy, achieving efficient verification with negligible latency overhead.
arXiv Detail & Related papers (2025-12-11T19:18:07Z)
- VeriGuard: Enhancing LLM Agent Safety via Verified Code Generation
The deployment of autonomous AI agents in sensitive domains, such as healthcare, introduces critical risks to safety, security, and privacy. We introduce VeriGuard, a novel framework that provides formal safety guarantees for LLM-based agents.
arXiv Detail & Related papers (2025-10-03T04:11:43Z)
- Context Lineage Assurance for Non-Human Identities in Critical Multi-Agent Systems
We introduce a cryptographically grounded mechanism for lineage verification, anchored in append-only Merkle tree structures. Unlike traditional A2A models that primarily secure point-to-point interactions, our approach enables both agents and external verifiers to cryptographically validate multi-hop provenance. In parallel, we augment the A2A agent card to incorporate explicit identity verification primitives, enabling both peer agents and human approvers to authenticate the legitimacy of NHI representations.
arXiv Detail & Related papers (2025-09-22T20:59:51Z)
- Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
A dishonest institution can exploit abstention mechanisms to discriminate or unjustly deny services under the guise of uncertainty. We demonstrate the practicality of this threat by introducing an uncertainty-inducing attack called Mirage. We propose Confidential Guardian, a framework that analyzes calibration metrics on a reference dataset to detect artificially suppressed confidence.
arXiv Detail & Related papers (2025-05-29T19:47:50Z)
- A Label-Free Heterophily-Guided Approach for Unsupervised Graph Fraud Detection
We propose a Heterophily-guided Unsupervised Graph fraud dEtection approach (HUGE) for unsupervised GFD. In the estimation module, we design a novel label-free heterophily metric called HALO, which captures the critical graph properties for GFD. In the alignment-based fraud detection module, we develop a joint-GNN architecture with ranking loss and asymmetric alignment loss.
arXiv Detail & Related papers (2025-02-18T22:07:36Z)
- Formal Verification of Permission Voucher
The Permission Voucher Protocol is a system designed for secure and authenticated access control in distributed environments. The analysis employs the Tamarin Prover, a state-of-the-art tool for symbolic verification, to evaluate key security properties. Results confirm the protocol's robustness against common attacks such as message tampering, impersonation, and replay.
arXiv Detail & Related papers (2024-12-18T14:11:50Z)
- Agora: Trust Less and Open More in Verification for Confidential Computing
We introduce a novel binary verification service, AGORA, scrupulously designed to overcome the challenge. Certain tasks can be delegated to untrusted entities, while the corresponding validators are securely housed within the trusted computing base. Through a novel blockchain-based bounty task manager, it also utilizes crowdsourcing to remove trust in theorem provers.
arXiv Detail & Related papers (2024-07-21T05:29:22Z)
- FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation
FedGT is a novel framework for identifying malicious clients in federated learning with secure aggregation.
We show that FedGT significantly outperforms the private robust aggregation approach based on the geometric median recently proposed by Pillutla et al.
arXiv Detail & Related papers (2023-05-09T14:54:59Z)
- Confidence Composition for Monitors of Verification Assumptions
We propose a three-step framework for monitoring the confidence in verification assumptions.
In two case studies, we demonstrate that the composed monitors improve over their constituents and successfully predict safety violations.
arXiv Detail & Related papers (2021-11-03T18:14:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.