A Scheduling-Aware Defense Against Prefetching-Based Side-Channel Attacks
- URL: http://arxiv.org/abs/2410.00452v1
- Date: Tue, 1 Oct 2024 07:12:23 GMT
- Title: A Scheduling-Aware Defense Against Prefetching-Based Side-Channel Attacks
- Authors: Till Schlüter, Nils Ole Tippenhauer
- Abstract summary: Speculative loading of memory, called prefetching, is common in real-world CPUs.
Prefetching can be exploited to bypass process isolation and leak secrets, such as keys used in RSA, AES, and ECDH implementations.
We implement our countermeasure for an x86_64 and an ARM processor.
- Score: 16.896693436047137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern computer processors use microarchitectural optimization mechanisms to improve performance. As a downside, such optimizations are prone to introducing side-channel vulnerabilities. Speculative loading of memory, called prefetching, is common in real-world CPUs and may cause such side-channel vulnerabilities: prior work has shown that it can be exploited to bypass process isolation and leak secrets, such as keys used in RSA, AES, and ECDH implementations. However, to date, no effective and efficient countermeasure has been presented that secures software on systems with affected prefetchers. In this work, we answer the question: how can a process defend against prefetch-based side channels? We first systematize the prefetching-based side-channel vulnerabilities presented in academic literature so far. Next, we design and implement PreFence, a scheduling-aware defense against these side channels that allows processes to disable the prefetcher temporarily during security-critical operations. We implement our countermeasure for an x86_64 and an ARM processor; it can be adapted to any platform that allows disabling the prefetcher. We evaluate our defense and find that it reliably stops prefetch leakage. Our countermeasure incurs negligible performance impact while no security-relevant code is executing, and its worst-case performance is comparable to completely turning off the prefetcher. The expected average performance impact depends on the security-relevant code in the application and can be negligible, as we demonstrate with a simple web server application. We expect that our countermeasure could be widely integrated into commodity OSs, and even be extended to signal generally security-relevant code to the kernel, allowing coordinated application of countermeasures.
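A minimal sketch of how an application might request such protection, assuming a hypothetical prctl-style kernel interface (the option constants below are illustrative, not the paper's actual API; on Intel x86_64 parts, the per-core knob the kernel would toggle is the documented prefetcher-disable bits in MSR 0x1A4):
```c
#include <stdio.h>
#include <sys/prctl.h>

/* Hypothetical option values for illustration only; PreFence's real
 * kernel interface may look different. */
#define PR_SET_PREFETCH_OFF 0x5001
#define PR_SET_PREFETCH_ON  0x5002

static void security_critical_operation(void)
{
    /* e.g., an RSA, AES, or ECDH computation on secret key material */
}

int main(void)
{
    /* Ask the kernel to keep the hardware prefetcher off whenever this
     * task runs; scheduling-aware means the setting follows the task
     * to whichever core it is scheduled on. */
    if (prctl(PR_SET_PREFETCH_OFF, 0, 0, 0, 0) != 0)
        perror("prctl(PR_SET_PREFETCH_OFF)");

    security_critical_operation();

    /* Re-enable prefetching so non-critical code keeps full speed. */
    if (prctl(PR_SET_PREFETCH_ON, 0, 0, 0, 0) != 0)
        perror("prctl(PR_SET_PREFETCH_ON)");

    return 0;
}
```
Scoping the disable request to the security-critical region is what keeps the average overhead negligible: all other code runs with the prefetcher fully enabled.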
Related papers
- Libra: Architectural Support For Principled, Secure And Efficient Balanced Execution On High-End Processors (Extended Version) [9.404954747748523]
Control-flow leakage (CFL) attacks enable an attacker to expose control-flow decisions of a victim program via side-channel observations.
Linearization has been widely believed to be the only effective countermeasure against CFL attacks.
We propose Libra, a generic and principled hardware-software codesign to efficiently address CFL on high-end processors.
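To make the linearization baseline concrete, here is a minimal, self-contained C sketch (not Libra's hardware-software mechanism): a secret-dependent branch is replaced by straight-line arithmetic so both outcomes take the same path and cost:
```c
#include <stdint.h>
#include <stdio.h>

/* Secret-dependent branch: leaks `secret` through control flow. */
static uint32_t select_branchy(uint32_t secret, uint32_t a, uint32_t b)
{
    return secret ? a : b;
}

/* Linearized version: no branch on the secret; `mask` is all-ones
 * when secret is nonzero and zero otherwise. */
static uint32_t select_linear(uint32_t secret, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)(-(int32_t)(secret != 0));
    return (a & mask) | (b & ~mask);
}

int main(void)
{
    printf("%u %u\n", select_branchy(1, 7, 9), select_linear(1, 7, 9));
    printf("%u %u\n", select_branchy(0, 7, 9), select_linear(0, 7, 9));
    return 0;
}
```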
arXiv Detail & Related papers (2024-09-05T17:56:19Z) - BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models [57.5404308854535]
Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of unsafe behaviors while evading detection during normal interactions.
We present BEEAR, a mitigation approach leveraging the insight that backdoor triggers induce relatively uniform drifts in the model's embedding space.
Our bi-level optimization method identifies universal embedding perturbations that elicit unwanted behaviors and adjusts the model parameters to reinforce safe behaviors against these perturbations.
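As a loose scalar analogue of this bi-level structure (the toy model and all quantities below are illustrative; the actual method operates on an instruction-tuned LLM's embedding space):
```c
#include <stdio.h>

/* Toy bi-level loop: model "behavior" is y = w * (x + d). The inner
 * loop gradient-ascends a universal perturbation d maximizing the
 * unsafe score y^2; the outer loop gradient-descends w so the model
 * stays safe (y near 0) even under that worst-case d. */
int main(void)
{
    double w = 1.0;   /* model parameter */
    double x = 0.5;   /* benign "embedding" */
    double lr = 0.1;

    for (int outer = 0; outer < 50; outer++) {
        double d = 0.0;
        /* inner: find a perturbation eliciting unwanted behavior */
        for (int inner = 0; inner < 20; inner++) {
            double y = w * (x + d);
            d += lr * 2.0 * y * w;      /* ascend d(y^2)/dd */
            if (d > 1.0) d = 1.0;       /* keep the perturbation bounded */
            if (d < -1.0) d = -1.0;
        }
        /* outer: reinforce safe behavior against that perturbation */
        double y = w * (x + d);
        w -= lr * 2.0 * y * (x + d);    /* descend d(y^2)/dw */
    }
    printf("hardened w = %f\n", w);     /* drifts toward 0, i.e. "safe" */
    return 0;
}
```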
arXiv Detail & Related papers (2024-06-24T19:29:47Z) - Fully Randomized Pointers [7.1754940591892735]
We propose Fully Randomized Pointers (FRP) as a stronger memory error defense that is resistant to even brute force attacks.
The key idea is to fully randomize pointer bits -- as much as possible -- while also preserving binary compatibility.
We show that FRP is secure, practical, and compatible at the binary level, while a hardware implementation can achieve low performance overheads.
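As a toy software illustration of the underlying idea (the actual FRP design randomizes far more bits with hardware support; this sketch only tags the 16 architecturally unused upper bits of a 48-bit x86_64 user-space pointer):
```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 48  /* assumes 48-bit virtual addresses (x86_64) */

/* Stash a random tag in the unused upper pointer bits. */
static void *tag_ptr(void *p, uint16_t tag)
{
    return (void *)(((uintptr_t)p & ((1ULL << TAG_SHIFT) - 1))
                    | ((uintptr_t)tag << TAG_SHIFT));
}

/* Strip the tag before any dereference, preserving compatibility. */
static void *untag_ptr(void *p)
{
    return (void *)((uintptr_t)p & ((1ULL << TAG_SHIFT) - 1));
}

int main(void)
{
    int *x = malloc(sizeof *x);
    uint16_t tag = (uint16_t)rand();   /* random per-allocation tag */

    int *hidden = tag_ptr(x, tag);     /* pointer as it would be stored */
    *(int *)untag_ptr(hidden) = 42;    /* untag, then dereference */

    printf("%d (tag=0x%04x)\n", *(int *)untag_ptr(hidden), (unsigned)tag);
    free(x);
    return 0;
}
```
An attacker guessing the stored pointer value must now also guess the random tag, which is the brute-force-resistance intuition behind randomizing as many pointer bits as possible.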
arXiv Detail & Related papers (2024-05-21T05:54:27Z) - Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment [56.2017039028998]
Fine-tuning of Language-Model-as-a-Service (LMaaS) models introduces new threats, particularly the Fine-tuning based Jailbreak Attack (FJAttack).
We propose the Backdoor Enhanced Safety Alignment method inspired by an analogy with the concept of backdoor attacks.
Our comprehensive experiments demonstrate that through Backdoor Enhanced Safety Alignment, by adding as few as 11 safety examples, maliciously fine-tuned LLMs achieve safety performance similar to the original aligned models without harming benign performance.
arXiv Detail & Related papers (2024-02-22T21:05:18Z) - Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability [61.549465258257115]
We propose a novel and more severe backdoor attack, TransTroj, which enables the backdoors embedded in PTMs to transfer efficiently through the model supply chain.
Experimental results show that our method significantly outperforms SOTA task-agnostic backdoor attacks.
arXiv Detail & Related papers (2024-01-29T04:35:48Z) - DECLASSIFLOW: A Static Analysis for Modeling Non-Speculative Knowledge to Relax Speculative Execution Security Measures (Full Version) [9.816078445230305]
Speculative execution attacks undermine the security of constant-time programming.
This paper proposes DECLASSIFLOW to efficiently protect constant-time code from speculative leakage.
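The kind of protection DECLASSIFLOW aims to relax can be sketched as follows, assuming a GCC/Clang toolchain on x86_64: a speculation barrier after a bounds check blocks Spectre-v1-style speculative out-of-bounds loads, and a DECLASSIFLOW-style analysis would elide the fence wherever the loaded value is provably non-speculative knowledge:
```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Speculation barrier; x86_64, GCC/Clang inline asm. */
static inline void speculation_barrier(void)
{
    __asm__ volatile("lfence" ::: "memory");
}

/* Bounds-checked load hardened against Spectre-v1: the fence keeps the
 * CPU from speculatively using an out-of-bounds `idx`. */
static uint8_t read_checked(const uint8_t *arr, size_t len, size_t idx)
{
    if (idx >= len)
        return 0;
    speculation_barrier();
    return arr[idx];
}

int main(void)
{
    uint8_t data[4] = {1, 2, 3, 4};
    printf("%u\n", (unsigned)read_checked(data, 4, 2));
    return 0;
}
```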
arXiv Detail & Related papers (2023-12-14T21:00:20Z) - Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecture that combine countermeasures to provide effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker.
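A software-only sketch of the code-encryption idea on Linux/x86_64 (PolEn decrypts instructions inside the CPU via a hardware extension, which software cannot replicate; this toy decrypts into an RWX page just before the call and may be rejected on W^X-hardened systems):
```c
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define KEY 0x5A  /* toy XOR key standing in for real encryption */

int main(void)
{
    /* b8 2a 00 00 00  mov eax, 42 ; c3  ret -- stored XOR-encrypted */
    uint8_t enc[] = {0xb8 ^ KEY, 0x2a ^ KEY, 0x00 ^ KEY,
                     0x00 ^ KEY, 0x00 ^ KEY, 0xc3 ^ KEY};

    uint8_t *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;

    for (size_t i = 0; i < sizeof enc; i++)  /* decrypt just in time */
        page[i] = enc[i] ^ KEY;

    int (*fn)(void) = (int (*)(void))(uintptr_t)page;
    printf("%d\n", fn());                    /* prints 42 */

    munmap(page, 4096);
    return 0;
}
```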
arXiv Detail & Related papers (2023-10-11T09:16:10Z) - Citadel: Real-World Hardware-Software Contracts for Secure Enclaves Through Microarchitectural Isolation and Controlled Speculation [8.414722884952525]
Hardware isolation primitives such as secure enclaves aim to protect programs, but remain vulnerable to transient execution attacks.
This paper advocates for processors to incorporate microarchitectural isolation primitives and mechanisms for controlled speculation.
We introduce two mechanisms to securely share memory between an enclave and an untrusted OS in an out-of-order processor.
arXiv Detail & Related papers (2023-06-26T17:51:23Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
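A toy sketch of window ablation with majority voting (the per-window classifier below is a stand-in, not MalConv): the input bytes are split into fixed windows, each window is classified in isolation, and a contiguous adversarial patch can only influence the windows it overlaps, which bounds its effect on the vote.
```c
#include <stdio.h>
#include <string.h>

#define WINDOW 4

/* Hypothetical per-window classifier: 1 = malicious, 0 = benign. */
static int classify_window(const unsigned char *w, size_t n)
{
    return memchr(w, 0xCC, n) != NULL;  /* stand-in "signature" check */
}

/* Classify each window independently, then take a majority vote. */
static int smoothed_classify(const unsigned char *bytes, size_t len)
{
    size_t votes = 0, windows = 0;
    for (size_t i = 0; i < len; i += WINDOW, windows++)
        votes += classify_window(bytes + i,
                                 len - i < WINDOW ? len - i : WINDOW);
    return 2 * votes > windows;
}

int main(void)
{
    unsigned char sample[] = {0x4D, 0x5A, 0xCC, 0x00,
                              0xCC, 0xCC, 0x90, 0x90};
    printf("label=%d\n", smoothed_classify(sample, sizeof sample));
    return 0;
}
```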
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
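The Full DOS variant relies on a documented property of the PE format: the Windows loader reads only the "MZ" magic and the e_lfanew field from the 64-byte DOS header, leaving the bytes in between free for an adversarial payload. A simplified view of that layout in C:
```c
#include <stdint.h>
#include <stdio.h>

/* Simplified DOS header: the real header names every field, but the
 * modern loader only consults e_magic and e_lfanew, so everything in
 * between is editable slack space for the attack. */
struct dos_header {
    uint8_t  e_magic[2];   /* 0x00: must stay "MZ" */
    uint8_t  unused[58];   /* 0x02-0x3B: editable by the attacker */
    uint32_t e_lfanew;     /* 0x3C: must keep pointing at the PE header */
};

int main(void)
{
    printf("editable DOS-header bytes: %zu\n",
           sizeof(((struct dos_header *)0)->unused));
    return 0;
}
```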
arXiv Detail & Related papers (2020-08-17T07:16:57Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.