Citadel: Simple Spectre-Safe Isolation For Real-World Programs That Share Memory
- URL: http://arxiv.org/abs/2306.14882v5
- Date: Fri, 07 Feb 2025 03:04:47 GMT
- Title: Citadel: Simple Spectre-Safe Isolation For Real-World Programs That Share Memory
- Authors: Jules Drean, Miguel Gomez-Garcia, Fisher Jepsen, Thomas Bourgeat, Srinivas Devadas
- Abstract summary: We introduce a new security property we call relaxed microarchitectural isolation (RMI). RMI allows sensitive programs that are not constant-time to share memory with an attacker while restricting the information leakage to that of non-speculative execution.
Our end-to-end prototype, Citadel, consists of an FPGA-based multicore processor that boots Linux and runs secure applications.
- Abstract: Transient execution side-channel attacks, such as Spectre, have been shown to break almost all isolation primitives. We introduce a new security property we call relaxed microarchitectural isolation (RMI) that allows sensitive programs that are not constant-time to share memory with an attacker while restricting the information leakage to that of non-speculative execution. Although this type of speculative security property is typically challenging to enforce, we show that we can leverage the enclave setup to achieve it. In particular, we use microarchitectural isolation to restrict the attacker's observations in conjunction with straightforward hardware mechanisms to limit speculation. This new design point presents a compelling trade-off between security, usability, and performance, making it possible to efficiently enforce RMI for any program. We demonstrate our approach by implementing and evaluating two simple defense mechanisms that satisfy RMI: (1) Safe mode, which disables speculative accesses to shared memory, and (2) Burst mode, a localized performance optimization that requires simple program analysis on small code snippets. Our end-to-end prototype, Citadel, consists of an FPGA-based multicore processor that boots Linux and runs secure applications, including cryptographic libraries and private inference, with less than 5% performance overhead.
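The two defenses are hardware mechanisms, but the Safe-mode policy itself is easy to state. Below is a minimal, hypothetical Python model of that rule, that speculative loads to shared memory stall until speculation resolves, written purely to illustrate the policy; the class name and structure are invented, not the Citadel hardware.

```python
# Toy model of Citadel's "Safe mode" policy: speculative loads to
# shared (attacker-visible) memory are not issued; they stall until
# speculation resolves. Names and structure are illustrative only.

class SafeModeCore:
    def __init__(self, private_mem, shared_mem):
        self.private_mem = private_mem  # enclave-private pages
        self.shared_mem = shared_mem    # pages shared with the attacker
        self.speculative = False        # past an unresolved branch?

    def load(self, region, addr):
        if region == "shared" and self.speculative:
            # Safe mode: the access waits; no cache line moves, so the
            # attacker's timing view is unchanged by transient loads.
            raise RuntimeError("stall: speculative shared-memory load blocked")
        mem = self.shared_mem if region == "shared" else self.private_mem
        return mem[addr]

core = SafeModeCore(private_mem={0: 42}, shared_mem={0: 7})
core.speculative = True           # e.g., executing past a predicted branch
print(core.load("private", 0))    # speculation on private memory proceeds: 42
try:
    core.load("shared", 0)        # a transient, attacker-visible access
except RuntimeError as e:
    print(e)
```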
Related papers
- μRL: Discovering Transient Execution Vulnerabilities Using Reinforcement Learning
We propose using reinforcement learning to address the challenges of discovering microarchitectural vulnerabilities, such as Spectre and Meltdown.
Our RL agents interact with the processor, learning from real-time feedback to prioritize instruction sequences more likely to reveal vulnerabilities.
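The abstract describes a standard RL search loop: propose an instruction sequence, run it, score the side-channel signal it produces. A hedged epsilon-greedy toy in Python; the `measure_leakage` oracle and the instruction alphabet are invented placeholders, not the paper's setup, which drives a real processor with timing/counter feedback.

```python
import random

# Hypothetical instruction alphabet; real agents emit machine code.
ALPHABET = ["load", "store", "branch", "fence", "mul"]

def measure_leakage(seq):
    # Placeholder reward: pretend sequences mixing a branch with a
    # dependent load produce a measurable transient signal.
    return 1.0 if "branch" in seq and "load" in seq else 0.0

def epsilon_greedy_search(episodes=200, seq_len=4, eps=0.2):
    q = {}  # sequence -> running average reward
    best, best_r = None, -1.0
    for _ in range(episodes):
        if q and random.random() > eps:
            seq = list(max(q, key=q.get))                # exploit best-known
            seq[random.randrange(seq_len)] = random.choice(ALPHABET)  # mutate
        else:
            seq = [random.choice(ALPHABET) for _ in range(seq_len)]   # explore
        r = measure_leakage(seq)
        key = tuple(seq)
        q[key] = q.get(key, 0.0) * 0.5 + r * 0.5
        if r > best_r:
            best, best_r = key, r
    return best, best_r

print(epsilon_greedy_search())
```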
arXiv Detail & Related papers (2025-02-20T06:42:03Z)
- Comprehensive Kernel Safety in the Spectre Era: Mitigations and Performance Evaluation (Extended Version)
We show that layout randomization offers a comparable safety guarantee in a system with memory separation.
We show that kernel safety cannot be restored for attackers capable of using side channels and speculative execution.
We introduce enforcement mechanisms that can guarantee speculative kernel safety for safe system calls in the Spectre era.
arXiv Detail & Related papers (2024-11-27T07:06:28Z)
- BULKHEAD: Secure, Scalable, and Efficient Kernel Compartmentalization with PKS
Kernel compartmentalization is a promising approach that follows the least-privilege principle.
We present BULKHEAD, a secure, scalable, and efficient kernel compartmentalization technique.
We implement a prototype system on Linux v6.1 to compartmentalize loadable kernel modules.
arXiv Detail & Related papers (2024-09-15T04:11:26Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
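A minimal sketch of the Lagrange-coded-sharing idea PriRoAgg builds on: a client splits an update into shares that individually reveal nothing, while any degree+1 of them reconstruct the secret. The field size and API below are illustrative, and the zero-knowledge-proof layer is omitted entirely.

```python
import random

P = 2**31 - 1  # small Mersenne prime field; illustrative only

def share(secret, n_shares, degree):
    """Shamir/Lagrange-style shares: f(0) = secret, deg(f) = degree."""
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Fermat inverse of the denominator, since P is prime.
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

update = 123456  # one coordinate of a quantized model update
shares = share(update, n_shares=5, degree=2)
assert reconstruct(shares[:3]) == update   # any degree+1 shares suffice
assert reconstruct(shares[1:4]) == update
print("reconstructed:", reconstruct(shares[:3]))
```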
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Jailbreaking as a Reward Misspecification Problem
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process.
We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness.
We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z)
- Beyond Over-Protection: A Targeted Approach to Spectre Mitigation and Performance Optimization
Speculative load hardening in LLVM protects against leaks by tracking the speculation state and masking values during misspeculation.
We extend an existing side-channel model validation framework, Scam-V, to check the vulnerability of programs to Spectre-PHT attacks and to optimize program protection using the SLH approach.
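SLH's core trick is a data dependence: a mask that is all-ones on the correct path and all-zeros under misspeculation is ANDed into loaded addresses and values, so transient loads see a harmless constant. A toy Python rendering of that data flow; real SLH is an LLVM IR pass, and the explicit `misspeculating` flag here only stands in for the hardware's branch prediction.

```python
# Toy rendering of speculative load hardening (SLH): misprediction
# zeroes the mask, which neutralizes the index and the loaded value.

MASK_OK, MASK_MISSPEC = 0xFFFFFFFF, 0x00000000

array = [10, 20, 30, 40]
secret = [99]  # lives "past the end" in this toy layout

def gadget(idx, misspeculating):
    # SLH derives the mask from the same bounds-check condition the
    # hardware predicts, so a wrong prediction zeroes the index.
    mask = MASK_MISSPEC if misspeculating else MASK_OK
    if idx < len(array) or misspeculating:   # transient path may proceed
        safe_idx = idx & mask                # the SLH hardening step
        data = (array + secret)[safe_idx]    # out-of-bounds only if unmasked
        return data & mask                   # value also masked
    return None

print(gadget(2, misspeculating=False))  # in-bounds, correct path -> 30
print(gadget(4, misspeculating=True))   # Spectre-PHT attempt -> 0, not 99
```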
arXiv Detail & Related papers (2023-12-15T13:16:50Z)
- Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components
PolEn is a toolchain and a processor architecture that combine countermeasures in order to provide an effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker.
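Code polymorphism keeps a function's result fixed while re-randomizing the operation-level trace an attacker can observe. A hypothetical Python sketch of that idea, shuffling independent operations and padding with dummies; a real polymorphic engine rewrites machine code, not Python.

```python
import random

def polymorphic_madd(a, b, c):
    """Compute a*b + c while randomizing the observable operation order."""
    trace = []
    # The two independent sub-results can run in either order, and dummy
    # operations are sprinkled in, so the trace differs on every call.
    steps = [("mul", lambda s: s.__setitem__("t1", a * b)),
             ("ldc", lambda s: s.__setitem__("t2", c))]
    random.shuffle(steps)
    state = {}
    for name, op in steps:
        if random.random() < 0.5:
            trace.append("nop")          # dummy instruction
        op(state)
        trace.append(name)
    trace.append("add")
    return state["t1"] + state["t2"], trace

for _ in range(3):
    result, trace = polymorphic_madd(3, 4, 5)
    print(result, trace)   # result is always 17; the trace varies
```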
arXiv Detail & Related papers (2023-10-11T09:16:10Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
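The certificate comes from voting over ablated views: each view keeps one contiguous window of bytes and blanks the rest, so a contiguous adversarial patch can flip only the few views that overlap it, and the vote margin bounds the certified radius. A hedged sketch with a stand-in classifier; `base_classify` is a toy placeholder, not the paper's MalConv-style network.

```python
# Sketch of de-randomized smoothing via window ablation: classify many
# views that each keep one contiguous byte window, then majority-vote.
from collections import Counter

def ablate(data, start, width):
    view = bytearray(len(data))          # everything outside the window
    view[start:start + width] = data[start:start + width]  # is zeroed
    return bytes(view)

def base_classify(view):
    # Toy stand-in for the base model: flag a view if it contains a
    # pretend "signature" byte.
    return 1 if 0xCC in view else 0

def drsm_classify(data, width=8):
    votes = Counter(
        base_classify(ablate(data, s, width))
        for s in range(0, len(data), width)
    )
    # The gap between the top two vote counts bounds how many views an
    # attacker must flip, which yields the certified radius.
    return votes.most_common(1)[0][0], votes

# 5 of 8 windows contain the signature byte, so the vote is "malware".
data = bytes(([0xCC] + [0] * 7) * 5 + [0] * 24)
label, votes = drsm_classify(data)
print(label, votes)
```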
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
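One concrete leakage channel such analyses formalize is visible in plain logistic regression: with batch size 1, the weight gradient is a scalar multiple of the input feature vector, so an observer of the gradient recovers the features up to scale. A tiny standalone numpy illustration, not the paper's VLR protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)        # model weights
x = rng.normal(size=5)        # one party's private feature vector
y = 1.0                       # label

# Logistic-regression gradient for a single example:
#   dL/dw = (sigmoid(w.x) - y) * x  -- a scalar times the private x.
z = 1.0 / (1.0 + np.exp(-w @ x))
grad = (z - y) * x

# Normalizing the observed gradient recovers x's direction exactly.
x_hat = grad / np.linalg.norm(grad)
print(np.allclose(abs(x_hat @ x) / np.linalg.norm(x), 1.0))  # True
```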
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
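The relaxed target is simple to state: pick a loss level alpha and stop pushing the training loss below it, descending while the batch loss exceeds alpha and ascending otherwise. A hedged sketch of that update rule on a toy logistic-regression problem; the paper's full method adds posterior flattening for classification, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)
alpha, lr = 0.35, 0.5  # target loss level and step size (illustrative)

def loss_and_grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-9
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

for _ in range(300):
    loss, grad = loss_and_grad(w)
    # RelaxLoss-style rule: descend above alpha, ascend below, so the
    # training loss hovers near alpha instead of collapsing to zero --
    # shrinking the member/non-member loss gap that MIAs exploit.
    w -= lr * grad if loss > alpha else -lr * grad

print(round(loss_and_grad(w)[0], 3), "target", alpha)
```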
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection
Adversarial EXEmples can bypass machine-learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
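The Full DOS attack works because most of the PE DOS header is never read by the Windows loader: only the 'MZ' magic at offset 0x00 and the e_lfanew pointer at offset 0x3C matter, so the bytes between them are free adversarial payload space. A hedged sketch of that byte budget, operating on a raw buffer with a toy stand-in executable rather than a real PE file:

```python
import struct

def full_dos_perturb(pe_bytes: bytes, payload: bytes) -> bytes:
    """Write an adversarial payload into the unused DOS-header bytes.

    The loader only requires the 'MZ' magic (offsets 0x00-0x01) and the
    e_lfanew field (offsets 0x3C-0x3F); offsets 0x02-0x3B are ignored,
    which is the space the Full DOS attack manipulates.
    """
    assert pe_bytes[:2] == b"MZ", "not a PE/DOS executable"
    budget = 0x3C - 0x02  # 58 freely modifiable bytes
    assert len(payload) <= budget
    buf = bytearray(pe_bytes)
    buf[0x02:0x02 + len(payload)] = payload
    return bytes(buf)

# Toy stand-in for an executable: MZ magic, padding, e_lfanew = 0x80.
fake_pe = bytearray(256)
fake_pe[:2] = b"MZ"
struct.pack_into("<I", fake_pe, 0x3C, 0x80)

mutated = full_dos_perturb(bytes(fake_pe), b"\xCC" * 16)
assert mutated[:2] == b"MZ" and mutated[0x3C:0x40] == fake_pe[0x3C:0x40]
print("bytes changed:", sum(a != b for a, b in zip(fake_pe, mutated)))
```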
arXiv Detail & Related papers (2020-08-17T07:16:57Z)