TICAL: Trusted and Integrity-protected Compilation of AppLications
- URL: http://arxiv.org/abs/2511.17070v2
- Date: Mon, 24 Nov 2025 09:28:48 GMT
- Title: TICAL: Trusted and Integrity-protected Compilation of AppLications
- Authors: Robert Krahn, Nikson Kanti Paul, Franz Gregor, Do Le Quoc, Andrey Brito, André Martin, Christof Fetzer
- Abstract summary: Tical is a framework for trusted compilation that provides integrity protection and confidentiality in build pipelines from source code to the final executable. Our approach harnesses TEEs as runtime protection but enriches TEEs with file system shielding and an immutable audit log with version history to provide accountability. Our evaluation shows that Tical can protect the confidentiality and integrity of whole CI/CD pipelines with an acceptable performance overhead.
- Score: 0.24919281650930603
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: During the past few years, we have witnessed various efforts to provide confidentiality and integrity for applications running in untrusted environments such as public clouds. In most of these approaches, hardware extensions such as Intel SGX, TDX, AMD SEV, etc., are leveraged to provide encryption and integrity protection on process or VM level. Although all of these approaches increase the trust in the application at runtime, an often overlooked aspect is the integrity and confidentiality protection at build time, which is equally important as maliciously injected code during compilation can compromise the entire application and system. In this paper, we present Tical, a practical framework for trusted compilation that provides integrity protection and confidentiality in build pipelines from source code to the final executable. Our approach harnesses TEEs as runtime protection but enriches TEEs with file system shielding and an immutable audit log with version history to provide accountability. This way, we can ensure that the compiler chain can only access trusted files and intermediate output, such as object files produced by trusted processes. Our evaluation using micro- and macro-benchmarks shows that Tical can protect the confidentiality and integrity of whole CI/CD pipelines with an acceptable performance overhead.
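The abstract describes two core mechanisms beyond the TEE itself: an immutable audit log with version history, and file system shielding so the compiler chain only reads trusted inputs. The paper does not publish an implementation here; the following is a minimal illustrative sketch of those two ideas, assuming nothing about Tical's actual interfaces (the `AuditLog` class and `check_trusted` helper are invented for illustration). A hash chain makes retroactive tampering detectable, and a hash allowlist approximates shielding of untrusted inputs.

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log: each entry commits to the previous
    entry's digest, so modifying or dropping any past entry breaks the
    chain (illustrative of an 'immutable audit log with version history')."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"prev": prev, "event": event, "digest": digest})
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + body).encode()).hexdigest() != e["digest"]:
                return False
            prev = e["digest"]
        return True

def check_trusted(content: bytes, allowlist: set) -> bool:
    """File-system shielding in miniature: an input may reach the compiler
    only if its SHA-256 digest appears in an allowlist of trusted hashes."""
    return hashlib.sha256(content).hexdigest() in allowlist
```

In a real pipeline the log would live inside the TEE and the allowlist would be populated from attested, trusted processes; this sketch only shows why a hash chain gives accountability: once a later entry is recorded, no earlier entry can be silently rewritten.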
Related papers
- Sharing is caring: Attestable and Trusted Workflows out of Distrustful Components [5.561558661997071]
We present Mica, a confidential computing architecture that decouples confidentiality from trust. Mica provides tenants with explicit mechanisms to define, restrict, and attest all communication paths between components. Our evaluation shows that Mica supports realistic cloud pipelines with only a small increase to the trusted computing base.
arXiv Detail & Related papers (2026-03-03T14:53:48Z) - RealSec-bench: A Benchmark for Evaluating Secure Code Generation in Real-World Repositories [58.32028251925354]
Large Language Models (LLMs) have demonstrated remarkable capabilities in code generation, but their proficiency in producing secure code remains a critical, under-explored area. We introduce RealSec-bench, a new benchmark for secure code generation meticulously constructed from real-world, high-risk Java repositories.
arXiv Detail & Related papers (2026-01-30T08:29:01Z) - CaMeLs Can Use Computers Too: System-level Security for Computer Use Agents [60.98294016925157]
AI agents are vulnerable to prompt injection attacks, where malicious content hijacks agent behavior to steal credentials or cause financial loss. We introduce Single-Shot Planning for CUAs, where a trusted planner generates a complete execution graph with conditional branches before any observation of potentially malicious content. Although this architectural isolation successfully prevents instruction injections, we show that additional measures are needed to prevent Branch Steering attacks.
arXiv Detail & Related papers (2026-01-14T23:06:35Z) - Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z) - ReliabilityRAG: Effective and Provably Robust Defense for RAG-based Web-Search [69.60882125603133]
We present ReliabilityRAG, a framework for adversarial robustness that explicitly leverages reliability information of retrieved documents. Our work is a significant step towards more effective, provably robust defenses against retrieved corpus corruption in RAG.
arXiv Detail & Related papers (2025-09-27T22:36:42Z) - Reinforcing Secure Live Migration through Verifiable State Management [1.6204399921642334]
We present TALOS, a lightweight framework for verifiable state management and trustworthy application migration. TALOS integrates memory introspection and control-flow graph extraction, enabling robust verification of state continuity and execution flow, thereby achieving strong security guarantees while maintaining efficiency and making it suitable for decentralized settings.
arXiv Detail & Related papers (2025-09-05T14:41:48Z) - Safe Sharing of Fast Kernel-Bypass I/O Among Nontrusting Applications [1.4273866043218153]
Protected user-level libraries have been proposed as a way to allow mutually distrusting applications to safely share kernel-bypass services. We show how to move waits outside the library itself, enabling synchronous interaction among processes without the need for polling. We present a set of safety and performance guidelines for developers of protected libraries, and a set of recommendations for developers of future protected library operating systems.
arXiv Detail & Related papers (2025-09-02T23:53:41Z) - A.S.E: A Repository-Level Benchmark for Evaluating Security in AI-Generated Code [49.009041488527544]
A.S.E is a repository-level evaluation benchmark for assessing the security of AI-generated code. Current large language models (LLMs) still struggle with secure coding. A larger reasoning budget does not necessarily lead to better code generation.
arXiv Detail & Related papers (2025-08-25T15:11:11Z) - Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems [43.253676241213626]
We propose an architecture for blockchain-based PAISs to preserve confidentiality and transparency. Smart contracts enact, enforce and store public interactions, while attribute-based encryption techniques are adopted to specify access grants to confidential information. We assess the security of our solution through a systematic threat model analysis and evaluate its practical feasibility.
arXiv Detail & Related papers (2024-12-07T20:18:36Z) - Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z) - CRISP: Confidentiality, Rollback, and Integrity Storage Protection for Confidential Cloud-Native Computing [0.757843972001219]
Cloud-native applications rely on orchestration and have their services frequently restarted.
During restarts, attackers can revert the state of confidential services to a previous version that may aid their malicious intent.
This paper presents CRISP, a rollback protection mechanism that uses an existing runtime for Intel SGX and transparently prevents rollback.
arXiv Detail & Related papers (2024-08-13T11:29:30Z) - HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z) - Privacy-Preserving Machine Learning in Untrusted Clouds Made Simple [2.3518279773643287]
We present a practical framework to deploy privacy-preserving machine learning applications in untrusted clouds.
We shield unmodified PyTorch ML applications by running them in Intel SGX enclaves with encrypted model parameters and input data.
Our approach is completely transparent to the machine learning application: the developer and the end-user do not need to modify the application in any way.
arXiv Detail & Related papers (2020-09-09T16:16:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.