Citadel: Real-World Hardware-Software Contracts for Secure Enclaves Through Microarchitectural Isolation and Controlled Speculation
- URL: http://arxiv.org/abs/2306.14882v4
- Date: Wed, 8 May 2024 18:07:03 GMT
- Title: Citadel: Real-World Hardware-Software Contracts for Secure Enclaves Through Microarchitectural Isolation and Controlled Speculation
- Authors: Jules Drean, Miguel Gomez-Garcia, Fisher Jepsen, Thomas Bourgeat, Srinivas Devadas
- Abstract summary: Hardware isolation primitives such as secure enclaves aim to protect programs, but remain vulnerable to transient execution attacks.
This paper advocates for processors to incorporate microarchitectural isolation primitives and mechanisms for controlled speculation.
We introduce two mechanisms to securely share memory between an enclave and an untrusted OS in an out-of-order processor.
- Score: 8.414722884952525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hardware isolation primitives such as secure enclaves aim to protect sensitive programs, but remain vulnerable to transient execution attacks. Complete microarchitectural isolation is not a satisfactory defense mechanism as it leaves out public shared memory, critical for usability and application performance. Conversely, hardware-software co-designs for secure speculation can counter these attacks but are not yet practical, since they make assumptions on the speculation modes, the exposed microarchitectural state, and the software, which are all hard to support for the entire software stack. This paper advocates for processors to incorporate microarchitectural isolation primitives and mechanisms for controlled speculation, enabling different execution modes. These modes can restrict what is exposed to an attacker, effectively balancing performance and program-analysis complexity. We introduce two mechanisms to securely share memory between an enclave and an untrusted OS in an out-of-order processor. We show that our two modes are complementary, achieving speculative non-interference with a reasonable performance impact, while requiring minimal code annotation and simple program analysis doable by hand. Our prototype, Citadel, is a multicore processor running on an FPGA, booting untrusted Linux, and supporting comprehensive enclave capabilities such as shared memory and remote attestation. To our knowledge, Citadel is the first end-to-end enclave platform to run secure applications, such as cryptographic libraries or small private inference workloads, on a speculative out-of-order multicore processor while protecting against a significant class of side-channel attacks.
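The abstract's notion of controlled speculation can be made concrete with a software analogy. The C sketch below is an illustration only, not Citadel's hardware mechanism: the fence-after-bounds-check pattern is a standard Spectre-v1 mitigation, and the buffer and function names are made up for the example. It shows the kind of guarantee the execution modes aim to provide when an enclave reads memory it shares with an untrusted OS.

```c
/* A minimal, hypothetical sketch of the idea behind "controlled
 * speculation": before touching memory shared with an untrusted OS,
 * the enclave constrains what a speculative attacker can observe.
 * Citadel's actual mechanisms live in hardware; this software analogy
 * uses an x86 lfence to stop speculative loads past a bounds check. */
#include <stdint.h>
#include <stddef.h>
#include <immintrin.h>

uint8_t shared_buf[256];   /* stand-in for public memory shared with the OS */

uint8_t read_shared(size_t idx, size_t len) {
    if (idx < len) {
        /* Serialize here: no load below executes speculatively with an
         * out-of-bounds idx, so no secret-dependent cache state leaks. */
        _mm_lfence();
        return shared_buf[idx];
    }
    return 0;
}
```

Citadel's point, per the abstract, is to obtain this kind of speculative non-interference from hardware execution modes with minimal annotation, rather than by fencing every access in software.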
Related papers
- A Scheduling-Aware Defense Against Prefetching-Based Side-Channel Attacks [16.896693436047137]
Speculative loading of memory, called prefetching, is common in real-world CPUs.
Prefetching can be exploited to bypass process isolation and leak secrets, such as keys used in RSA, AES, and ECDH implementations.
We implement our countermeasure for x86_64 and ARM processors; a conceptual sketch of scheduling-based isolation follows this entry.
arXiv Detail & Related papers (2024-10-01T07:12:23Z)
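As a loose illustration of the scheduling-aware theme above (the paper's actual countermeasure differs; the `pin_to_core` helper, core number 3, and the pinning policy are assumptions for this sketch), the following C program pins a secret-handling thread to a dedicated core so attacker threads cannot co-run there and observe the prefetcher state it trains.

```c
/* Illustrative only: one simple scheduling-level mitigation is to keep
 * secret-handling code on a core that attacker threads cannot share,
 * so prefetcher state trained by the victim is not observable.
 * The cited paper's countermeasure is more involved; core 3 and this
 * policy are assumptions made for the sketch. Linux-specific. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(0, sizeof(set), &set);  /* 0 = calling thread */
}

int main(void) {
    if (pin_to_core(3) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    /* ... run key handling (e.g., RSA/ECDH) here ... */
    puts("pinned to isolated core 3");
    return 0;
}
```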
- Compromising Embodied Agents with Contextual Backdoor Attacks [69.71630408822767]
Large language models (LLMs) have transformed the development of embodied intelligence.
This paper uncovers a significant backdoor security threat within this process.
By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a black-box LLM.
arXiv Detail & Related papers (2024-08-06T01:20:12Z)
- HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- Secure Synthesis of Distributed Cryptographic Applications (Technical Report) [1.9707603524984119]
We advocate using secure program partitioning to synthesize cryptographic applications.
This approach is promising, but formal results for the security of such compilers are limited in scope.
We develop a compiler security proof that handles subtleties essential for robust, efficient applications.
arXiv Detail & Related papers (2024-01-06T02:57:44Z)
- Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment [15.803413192172037]
We propose a test-driven sparse training framework called SafeCompress.
By simulating the attack mechanism as a safety test, SafeCompress can automatically compress a large model into a smaller one.
We conduct extensive experiments on five datasets for both computer vision and natural language processing tasks.
arXiv Detail & Related papers (2024-01-02T02:31:36Z)
- Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecture that combine countermeasures to provide effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker; a software-only analogy follows this entry.
arXiv Detail & Related papers (2023-10-11T09:16:10Z)
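PolEn decrypts instructions inside the CPU via a hardware extension. As a software-only analogy (purely illustrative; x86-64 Linux assumed, toy XOR key and payload), the C sketch below keeps a function's machine code encrypted at rest and decrypts it into an executable page just before the call.

```c
/* Software-only analogy of code encryption (PolEn does this in
 * hardware, decrypting inside the CPU): instruction bytes are stored
 * XOR-encrypted and only decrypted into an executable page right
 * before the call. x86-64 Linux only; key and payload are toy values. */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

/* "mov eax, 42; ret", each byte XORed with the key 0xAA */
static const uint8_t enc[] = {0xB8 ^ 0xAA, 0x2A ^ 0xAA, 0x00 ^ 0xAA,
                              0x00 ^ 0xAA, 0x00 ^ 0xAA, 0xC3 ^ 0xAA};

int main(void) {
    uint8_t *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;
    for (size_t i = 0; i < sizeof enc; i++)
        page[i] = enc[i] ^ 0xAA;                  /* decrypt just in time */
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) return 1;
    int (*fn)(void) = (int (*)(void))page;
    printf("decrypted code returned %d\n", fn()); /* prints 42 */
    munmap(page, 4096);
    return 0;
}
```

Writing the page first and only then marking it executable keeps the mapping W^X, mirroring the discipline a real deployment would need.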
- Evil from Within: Machine Learning Backdoors through Hardware Trojans [72.99519529521919]
Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars.
We introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning.
We demonstrate the practical feasibility of our attack by implanting our hardware trojan into the Xilinx Vitis AI DPU.
arXiv Detail & Related papers (2023-04-17T16:24:48Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables; a sketch of the windowing idea follows this entry.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
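The DRSM entry above describes a window ablation scheme. The C sketch below illustrates the general ablate-and-vote structure under stated assumptions: fixed-size windows and a toy `classify_window` heuristic standing in for a real base model such as MalConv.

```c
/* Conceptual sketch of DRSM-style window ablation: the executable's
 * bytes are split into fixed-size windows, a base classifier scores
 * each window in isolation, and the final label is a majority vote.
 * An adversarial payload of k bytes can corrupt only a bounded number
 * of windows, which is what makes the vote certifiable.
 * classify_window() is a stand-in for a real model such as MalConv. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical base classifier: 1 = malicious, 0 = benign. */
static int classify_window(const uint8_t *bytes, size_t n) {
    size_t hits = 0;                     /* toy heuristic for the demo */
    for (size_t i = 0; i < n; i++)
        if (bytes[i] == 0x90) hits++;    /* count NOP-like bytes */
    return hits > n / 4;
}

static int drsm_predict(const uint8_t *exe, size_t len, size_t window) {
    size_t votes_mal = 0, votes = 0;
    for (size_t off = 0; off < len; off += window, votes++) {
        size_t n = (len - off < window) ? len - off : window;
        votes_mal += classify_window(exe + off, n);
    }
    return 2 * votes_mal > votes;        /* majority vote */
}

int main(void) {
    uint8_t sample[1024] = {0x90};       /* placeholder bytes, not a PE */
    printf("label: %d\n", drsm_predict(sample, sizeof sample, 256));
    return 0;
}
```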
- Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment [12.153709321048947]
Model compression plays a crucial role in AI software deployment, aiming to reduce model size while maintaining high performance.
In this paper, we try to address the safe model compression problem from a safety-performance co-optimization perspective.
Specifically, inspired by the test-driven development (TDD) paradigm in software engineering, we propose a test-driven sparse training framework called SafeCompress.
arXiv Detail & Related papers (2022-08-11T04:41:08Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine-learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend, and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section; a byte-level sketch of the Full DOS idea follows this entry.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
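The Full DOS manipulation named above exploits a well-known PE layout fact: the loader only requires the "MZ" magic and the e_lfanew field of the DOS header, so the bytes in between can carry a payload without breaking execution. The C sketch below shows the byte-level edit; the payload bytes and the `inject_full_dos` helper are placeholders, with no real model or attack optimization in the loop.

```c
/* Sketch of the "Full DOS" manipulation described above: Windows
 * loaders only require the "MZ" magic (offsets 0..1) and the e_lfanew
 * field (offsets 0x3C..0x3F) of the DOS header, so the 58 bytes in
 * between can carry an adversarial payload without breaking execution.
 * Payload bytes here are placeholders, not a real attack. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DOS_FREE_START 0x02   /* first byte after the MZ magic */
#define DOS_FREE_END   0x3C   /* e_lfanew must stay intact     */

static int inject_full_dos(uint8_t *pe, size_t len,
                           const uint8_t *payload, size_t plen) {
    if (len < 0x40 || pe[0] != 'M' || pe[1] != 'Z') return -1;
    if (plen > DOS_FREE_END - DOS_FREE_START) return -1;
    memcpy(pe + DOS_FREE_START, payload, plen);
    return 0;
}

int main(void) {
    uint8_t pe[0x40] = {'M', 'Z'};            /* toy DOS header */
    const uint8_t payload[] = {0xDE, 0xAD, 0xBE, 0xEF};
    if (inject_full_dos(pe, sizeof pe, payload, sizeof payload) == 0)
        printf("payload injected into DOS header slack\n");
    return 0;
}
```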
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.