Beyond Over-Protection: A Targeted Approach to Spectre Mitigation and Performance Optimization
- URL: http://arxiv.org/abs/2312.09770v1
- Date: Fri, 15 Dec 2023 13:16:50 GMT
- Title: Beyond Over-Protection: A Targeted Approach to Spectre Mitigation and Performance Optimization
- Authors: Tiziano Marinaro, Pablo Buiras, Andreas Lindner, Roberto Guanciale, Hamed Nemati
- Abstract summary: Speculative load hardening in LLVM protects against leaks by tracking the speculation state and masking values during misspeculation.
We extend an existing side-channel model validation framework, Scam-V, to check the vulnerability of programs to Spectre-PHT attacks and to optimize the protection of programs using the SLH approach.
- Score: 3.4439829486606737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the advent of Spectre attacks, researchers and practitioners have developed a range of hardware and software measures to counter transient execution attacks. A prime example of such mitigation is speculative load hardening (SLH) in LLVM, which protects against leaks by tracking the speculation state and masking values during misspeculation. LLVM relies on static analysis to harden programs using SLH, which often results in over-protection and incurs performance overhead. We extended an existing side-channel model validation framework, Scam-V, to check the vulnerability of programs to Spectre-PHT attacks and to optimize the protection of programs using the SLH approach. We illustrate the efficacy of Scam-V by first demonstrating that it can automatically identify Spectre vulnerabilities in real programs, e.g., fragments of crypto-libraries. We then develop an optimization mechanism that validates the necessity of SLH hardening w.r.t. the target platform. Our experiments showed that the hardening introduced by LLVM could, in most cases, be significantly improved when the properties of the underlying microarchitecture are considered.
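To make the SLH mechanism concrete, the following is a minimal C sketch of a Spectre-PHT (bounds-check bypass) gadget and of the kind of masking that speculative load hardening applies. It is an illustration only: LLVM's pass operates at the IR/machine level and propagates a speculation predicate across branches, whereas the identifiers (array1, array2) and the branchless mask below are assumptions made for readability.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 64];
size_t  array1_size = 16;

/* Classic Spectre-PHT gadget: if the bounds check mispredicts, array1[x] is
 * loaded speculatively out of bounds and its value is encoded into the cache
 * by the dependent access to array2. */
uint8_t victim_unprotected(size_t x) {
    if (x < array1_size)
        return array2[array1[x] * 64];
    return 0;
}

/* SLH-style hardening sketch: derive an all-ones/all-zeros mask from the same
 * bounds check and use it to collapse the index and the loaded value to zero
 * under misspeculation, so no secret-dependent address reaches the cache.
 * Real SLH computes this predicate at the machine level so that the mask
 * itself cannot be bypassed by prediction. */
uint8_t victim_hardened(size_t x) {
    size_t mask = (size_t)0 - (size_t)(x < array1_size); /* all-ones iff in bounds */
    if (x < array1_size) {
        uint8_t v = array1[x & mask];            /* index forced to 0 when misspeculating */
        return array2[(v & (uint8_t)mask) * 64]; /* leaked value forced to 0 as well */
    }
    return 0;
}
```

Targeted schemes such as the Scam-V-based optimization above (or LightSLH below) aim to keep this masking only where the analysis, and the target microarchitecture, show that a load can actually leak.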
Related papers
- Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks.
We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction.
We propose Attention Tracker, a training-free detection method that tracks attention patterns on the instruction to detect prompt injection attacks.
arXiv Detail & Related papers (2024-11-01T04:05:59Z)
- LightSLH: Provable and Low-Overhead Spectre v1 Mitigation through Targeted Instruction Hardening [14.99532960317865]
We propose LightSLH, designed to mitigate the overhead of speculative load hardening by hardening instructions only when they are under threat from Spectre vulnerabilities.
LightSLH leverages program analysis techniques based on abstract interpretation to identify all instructions that could potentially lead to Spectre vulnerabilities and provides provable protection.
We demonstrate the security guarantees of LightSLH and evaluate its performance on cryptographic algorithm implementations from OpenSSL.
arXiv Detail & Related papers (2024-08-29T02:31:28Z)
- Improving Adversarial Robustness in Android Malware Detection by Reducing the Impact of Spurious Correlations [3.7937308360299116]
Machine learning (ML) has demonstrated significant advancements in Android malware detection (AMD).
However, the resilience of ML against realistic evasion attacks remains a major obstacle for AMD.
In this study, we propose a domain adaptation technique to improve the generalizability of AMD by aligning the distribution of malware samples and adversarial examples (AEs).
arXiv Detail & Related papers (2024-08-27T17:01:12Z)
- Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes the jailbreak vulnerability of LLMs to reward misspecification during the alignment process.
We introduce ReGap, a metric that quantifies the extent of reward misspecification, and demonstrate its effectiveness.
We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z)
- Are Large Language Models Good Prompt Optimizers? [65.48910201816223]
We conduct a study to uncover the actual mechanism of LLM-based Prompt Optimization.
Our findings reveal that LLMs struggle to identify the true causes of errors during reflection, tending to be biased by their own prior knowledge.
We introduce a new "Automatic Behavior Optimization" paradigm, which directly optimizes the target model's behavior in a more controllable manner.
arXiv Detail & Related papers (2024-02-03T09:48:54Z)
- Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecture that combine countermeasures in order to provide effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker (a minimal sketch of this idea follows below).
arXiv Detail & Related papers (2023-10-11T09:16:10Z)
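As a rough, hedged illustration of the code-polymorphism idea mentioned above (not PolEn's actual mechanism, which combines runtime code generation with hardware-assisted code decryption), the C sketch below switches between semantically equivalent instruction sequences so that repeated executions of the same computation present a different observable behaviour. The variant table and the use of rand() are illustrative assumptions.

```c
#include <stdint.h>
#include <stdlib.h>

/* Three semantically equivalent ways to compute 2*x. A polymorphic engine
 * would periodically pick (or regenerate) a variant, changing the executed
 * instruction stream without changing the program's result. */
static uint32_t twice_shift(uint32_t x) { return x << 1; }
static uint32_t twice_add(uint32_t x)   { return x + x; }
static uint32_t twice_mul(uint32_t x)   { return x * 2u; }

typedef uint32_t (*variant_fn)(uint32_t);
static variant_fn variants[] = { twice_shift, twice_add, twice_mul };

/* Pick a fresh variant on each call: an attacker profiling timing, power, or
 * instruction fetches observes a changing pattern for the same input. */
uint32_t twice_polymorphic(uint32_t x) {
    size_t i = (size_t)rand() % (sizeof variants / sizeof variants[0]);
    return variants[i](x);
}
```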
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Short Paper: Static and Microarchitectural ML-Based Approaches For Detecting Spectre Vulnerabilities and Attacks [0.0]
Spectre intrusions exploit speculative execution design vulnerabilities in modern processors.
Current state-of-the-art detection techniques utilize micro-architectural features or vulnerable speculative code to detect these threats.
We present the first comprehensive evaluation of static and microarchitectural analysis-assisted machine learning approaches to detect Spectre vulnerabilities.
arXiv Detail & Related papers (2022-10-26T03:55:39Z)
- Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective model poisoning (MP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section; a byte-level sketch of the Full DOS manipulation follows below.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
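As a hedged sketch of the byte budget that the Full DOS manipulation above exploits: the Windows loader reads only the MZ magic and the e_lfanew field from the 64-byte DOS header of a PE file, so every other header byte can be overwritten with an adversarial payload without breaking execution. The helper names below are hypothetical, and the payload search itself (gradient- or heuristic-based in the paper) is omitted.

```c
#include <stddef.h>
#include <stdint.h>

#define DOS_HEADER_SIZE  64
#define E_LFANEW_OFFSET  0x3C  /* 4-byte offset to the PE header: must be preserved */

/* A DOS-header byte is free to change unless it belongs to the "MZ" magic
 * (offsets 0-1) or to e_lfanew (offsets 0x3C-0x3F); everything else is
 * ignored by the loader for PE executables. */
static int is_editable_dos_byte(size_t offset) {
    if (offset < 2)
        return 0;
    if (offset >= E_LFANEW_OFFSET && offset < E_LFANEW_OFFSET + 4)
        return 0;
    return offset < DOS_HEADER_SIZE;
}

/* Overwrite the editable DOS-header bytes of an in-memory PE image with an
 * attacker-chosen payload of at least 58 bytes (hypothetical helper, for
 * illustration only). */
static void apply_full_dos_payload(uint8_t *pe_image, const uint8_t *payload) {
    size_t j = 0;
    for (size_t i = 0; i < DOS_HEADER_SIZE; i++)
        if (is_editable_dos_byte(i))
            pe_image[i] = payload[j++];
}
```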
This list is automatically generated from the titles and abstracts of the papers on this site.