Sharing is caring: Attestable and Trusted Workflows out of Distrustful Components
- URL: http://arxiv.org/abs/2603.03403v1
- Date: Tue, 03 Mar 2026 14:53:48 GMT
- Title: Sharing is caring: Attestable and Trusted Workflows out of Distrustful Components
- Authors: Amir Al Sadi, Sina Abdollahi, Adrien Ghosn, Hamed Haddadi, Marios Kogias
- Abstract summary: We present Mica, a confidential computing architecture that decouples confidentiality from trust. Mica provides tenants with explicit mechanisms to define, restrict, and attest all communication paths between components. Our evaluation shows that Mica supports realistic cloud pipelines with only a small increase to the trusted computing base.
- Score: 5.561558661997071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Confidential computing protects data in use within Trusted Execution Environments (TEEs), but current TEEs provide little support for secure communication between components. As a result, pipelines of independently developed and deployed TEEs must trust one another to avoid the leakage of sensitive information they exchange -- a fragile assumption that is unrealistic for modern cloud workloads. We present Mica, a confidential computing architecture that decouples confidentiality from trust. Mica provides tenants with explicit mechanisms to define, restrict, and attest all communication paths between components, ensuring that sensitive data cannot leak through shared resources or interactions. We implement Mica on Arm CCA using existing primitives, requiring only modest changes to the trusted computing base. Our extension adds a policy language to control and attest communication paths among Realms and with the untrusted world via shared protected and unprotected memory and control transfers. Our evaluation shows that Mica supports realistic cloud pipelines with only a small increase to the trusted computing base while providing strong, attestable confidentiality guarantees.
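The abstract describes a policy mechanism for declaring and attesting which communication paths between components are allowed. As a purely illustrative sketch of that idea, the following models a default-deny path policy; all names here (`Channel`, `Policy`, `allow`, `is_permitted`) and the media labels are assumptions for illustration, not Mica's actual policy language or API.

```python
# Hypothetical sketch of a default-deny communication-path policy,
# in the spirit of Mica's attested channels. Not Mica's real interface.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Channel:
    src: str     # sending component (e.g. a Realm)
    dst: str     # receiving component
    medium: str  # e.g. "protected_shm", "unprotected_shm", "control_transfer"

@dataclass
class Policy:
    allowed: set = field(default_factory=set)

    def allow(self, src: str, dst: str, medium: str) -> None:
        # Explicitly declare a permitted path; everything else stays denied.
        self.allowed.add(Channel(src, dst, medium))

    def is_permitted(self, src: str, dst: str, medium: str) -> bool:
        # Default-deny: only declared paths may carry data.
        return Channel(src, dst, medium) in self.allowed

# Example pipeline: ingest -> anonymize -> analytics over protected memory.
policy = Policy()
policy.allow("ingest", "anonymize", "protected_shm")
policy.allow("anonymize", "analytics", "protected_shm")

assert policy.is_permitted("ingest", "anonymize", "protected_shm")
assert not policy.is_permitted("ingest", "analytics", "protected_shm")
```

In the paper's setting such a declared policy would additionally be bound into the attestation evidence, so a tenant can verify the path restrictions before releasing secrets; the sketch above only captures the enforcement-lookup half.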
Related papers
- Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z)
- MemTrust: A Zero-Trust Architecture for Unified AI Memory System [1.6221135438213565]
Centralization creates a trust crisis where users must entrust cloud providers with sensitive digital memory data. We propose a five-layer architecture abstracting common functional components of AI memory systems. Based on this, we design MemTrust, a hardware-backed zero-trust architecture that provides cryptographic guarantees across all layers.
arXiv Detail & Related papers (2026-01-11T17:37:33Z) - Securing Generative AI in Healthcare: A Zero-Trust Architecture Powered by Confidential Computing on Google Cloud [0.0]
Confidential Zero-Trust Framework (CZF) is a security paradigm that combines Zero-Trust Architecture for granular access control with the hardware-enforced data isolation of Confidential Computing.<n>CZF provides a defense-in-depth architecture where data remains encrypted while in-use within a hardware-based Trusted Execution Environment.
arXiv Detail & Related papers (2025-11-14T19:56:52Z) - The Sum Leaks More Than Its Parts: Compositional Privacy Risks and Mitigations in Multi-Agent Collaboration [72.33801123508145]
Large language models (LLMs) are integral to multi-agent systems.<n>Privacy risks emerge that extend beyond memorization, direct inference, or single-turn evaluations.<n>In particular, seemingly innocuous responses, when composed across interactions, can cumulatively enable adversaries to recover sensitive information.
arXiv Detail & Related papers (2025-09-16T16:57:25Z) - Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems.<n>ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z) - CCxTrust: Confidential Computing Platform Based on TEE and TPM Collaborative Trust [8.505898774648989]
reliance on a single hardware root of trust (RoT) limits user confidence in cloud platforms.<n>Lack of interoperability and a unified trust model in multi-cloud environments prevents the establishment of a cross-platform, cross-cloud chain of trust.<n>This paper proposes CCxTrust, a confidential computing platform leveraging collaborative roots of trust from TEE and TPM.
arXiv Detail & Related papers (2024-12-05T03:12:49Z) - ACRIC: Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
Recent security incidents in safety-critical industries exposed how the lack of proper message authentication enables attackers to inject malicious commands or alter system behavior.<n>These shortcomings have prompted new regulations that emphasize the pressing need to strengthen cybersecurity.<n>We introduce ACRIC, a message authentication solution to secure legacy industrial communications.
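The entry above concerns adding message authentication to legacy frames. As a purely illustrative sketch of the general idea (this is NOT ACRIC's actual CRC-based construction; a truncated HMAC is used here only as a stand-in for a keyed integrity field, and the key and frame contents are invented):

```python
# Illustrative only: a generic keyed integrity tag appended to a legacy
# frame, standing in for an authenticated checksum field. Not ACRIC.
import hashlib
import hmac

KEY = b"shared-secret"  # assumed pre-shared key between endpoints

def tag(frame: bytes) -> bytes:
    # Truncate to 4 bytes to mimic a short checksum-sized field.
    return hmac.new(KEY, frame, hashlib.sha256).digest()[:4]

def verify(frame: bytes, t: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(tag(frame), t)

frame = b"\x01SET_VALVE=OPEN"
t = tag(frame)
assert verify(frame, t)                      # genuine frame accepted
assert not verify(b"\x01SET_VALVE=SHUT", t)  # tampered frame rejected
```

The design point the paper targets is retrofitting such a check without breaking legacy frame formats, which a plain keyed hash like this does not address by itself.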
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
- Authentication and identity management based on zero trust security model in micro-cloud environment [0.0]
The Zero Trust framework can better track and block external attackers while limiting security breaches resulting from insider attacks in the cloud paradigm.
This paper focuses on authentication mechanisms, calculation of trust score, and generation of policies in order to establish required access control to resources.
arXiv Detail & Related papers (2024-10-29T09:06:13Z)
- HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- Putting a Padlock on Lambda -- Integrating vTPMs into AWS Firecracker [49.1574468325115]
Software services place implicit trust in the cloud provider, without an explicit trust relationship.
There is currently no cloud provider that exposes Trusted Platform Module capabilities.
We improve trust by integrating a virtual TPM device into Firecracker, a virtual machine monitor originally developed by Amazon Web Services.
arXiv Detail & Related papers (2023-10-05T13:13:55Z)
- SyzTrust: State-aware Fuzzing on Trusted OS Designed for IoT Devices [67.65883495888258]
We present SyzTrust, the first state-aware fuzzing framework for vetting the security of resource-limited Trusted OSes.
SyzTrust adopts a hardware-assisted framework to enable fuzzing Trusted OSes directly on IoT devices.
We evaluate SyzTrust on Trusted OSes from three major vendors: Samsung, Tsinglink Cloud, and Ali Cloud.
arXiv Detail & Related papers (2023-09-26T08:11:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.