LightSLH: Provable and Low-Overhead Spectre v1 Mitigation through Targeted Instruction Hardening
- URL: http://arxiv.org/abs/2408.16220v1
- Date: Thu, 29 Aug 2024 02:31:28 GMT
- Title: LightSLH: Provable and Low-Overhead Spectre v1 Mitigation through Targeted Instruction Hardening
- Authors: Yiming Zhu, Wenchao Huang, Yan Xiong,
- Abstract summary: We propose LightSLH, designed to mitigate the overhead of existing Spectre countermeasures by hardening instructions only when they are actually under threat from Spectre vulnerabilities.
LightSLH leverages program analysis techniques based on abstract interpretation to identify all instructions that could potentially lead to Spectre vulnerabilities and provides provable protection.
We demonstrate the security guarantees of LightSLH and evaluate its performance on cryptographic algorithm implementations from OpenSSL.
- Score: 14.99532960317865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several software mitigations have been proposed to defend against Spectre vulnerabilities. However, these countermeasures often suffer from high performance overhead, largely due to unnecessary protections. We propose LightSLH, designed to mitigate this overhead by hardening instructions only when they are under threat from Spectre vulnerabilities. LightSLH leverages program analysis techniques based on abstract interpretation to identify all instructions that could potentially lead to Spectre vulnerabilities and provides provable protection. To enhance analysis efficiency and precision, LightSLH employs novel taint and value domains. The taint domain enables bit-level taint tracking, while the value domain allows LightSLH to analyze complex program structures such as pointers and structures. Furthermore, LightSLH uses a two-stage abstract interpretation approach to circumvent potential analysis paralysis issues. We demonstrate the security guarantees of LightSLH and evaluate its performance on cryptographic algorithm implementations from OpenSSL. LightSLH significantly reduces the overhead associated with speculative-load-hardening techniques. Our results show that LightSLH introduces no protection and thus no overhead on 4 out of the 7 studied algorithms, which contrasts with existing countermeasures that introduce additional overhead due to unnecessary hardening. Additionally, LightSLH performs, for the first time, a rigorous analysis of the security guarantees of RSA against Spectre v1, highlighting that the memory access patterns generated by the scatter-gather algorithm depend on secrets, even for observers at the cache line granularity, necessitating protection for such accesses.
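As a rough illustration of the hardening the abstract refers to (this is not code from the paper or from LLVM's SLH pass), the C sketch below contrasts a classic Spectre v1 bounds-check gadget with an index-masking variant in the spirit of speculative load hardening: the mask is derived from the bounds check through data flow rather than control flow, so even when the branch is mispredicted the masked load cannot reach an attacker-chosen address. The function names and the hand-written mask are illustrative assumptions; LLVM's actual pass maintains a speculation predicate in a register across branches and inserts the masking automatically.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum { TABLE_LEN = 16 };
static uint8_t table[TABLE_LEN];

/* Classic Spectre v1 gadget: if the bounds check is mispredicted, the CPU may
 * transiently load table[idx] for an out-of-bounds idx and leave a
 * secret-dependent trace in the cache. */
static uint8_t load_unhardened(size_t idx)
{
    if (idx < TABLE_LEN)
        return table[idx];
    return 0;
}

/* SLH-style index masking (hand-written sketch). The mask is all-ones when
 * idx is in bounds and all-zeros otherwise, computed arithmetically rather
 * than via the branch outcome, so a mispredicted branch still sees a zero
 * mask and the transient load is forced to index 0. A real compiler pass
 * (e.g. LLVM's -mspeculative-load-hardening) performs this at the IR/machine
 * level, where the optimizer cannot simply fold the mask away. */
static uint8_t load_hardened(size_t idx)
{
    size_t mask = (size_t)0 - (size_t)(idx < TABLE_LEN); /* ~0 if in bounds, 0 otherwise */
    if (idx < TABLE_LEN)
        return table[idx & mask];
    return 0;
}

int main(void)
{
    printf("%d %d\n", load_unhardened(3), load_hardened(3));
    return 0;
}
```

Per the abstract, LightSLH's contribution is deciding via abstract interpretation which loads need this masked form at all: on 4 of the 7 OpenSSL algorithms studied, no load requires hardening, so no overhead is introduced.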
Related papers
- Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities [63.603861880022954]
We introduce ADV-LLM, an iterative self-tuning process that crafts adversarial LLMs with enhanced jailbreak ability.
Our framework significantly reduces the computational cost of generating adversarial suffixes while achieving nearly 100% ASR on various open-source LLMs.
It exhibits strong attack transferability to closed-source models, achieving 99% ASR on GPT-3.5 and 49% ASR on GPT-4, despite being optimized solely on Llama3.
arXiv Detail & Related papers (2024-10-24T06:36:12Z)
- The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems [26.528288876732617]
A set of new timing side channels can be exploited to infer confidential system prompts and those issued by other users.
These vulnerabilities echo security challenges observed in traditional computing systems.
We propose a token-by-token search algorithm to efficiently recover shared prompt prefixes in the caches.
arXiv Detail & Related papers (2024-09-30T06:55:00Z)
- Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks [59.46556573924901]
This paper introduces Defensive Prompt Patch (DPP), a novel prompt-based defense mechanism for large language models (LLMs).
Unlike previous approaches, DPP is designed to achieve a minimal Attack Success Rate (ASR) while preserving the high utility of LLMs.
Empirical results conducted on LLAMA-2-7B-Chat and Mistral-7B-Instruct-v0.2 models demonstrate the robustness and adaptability of DPP.
arXiv Detail & Related papers (2024-05-30T14:40:35Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
- S-box Security Analysis of NIST Lightweight Cryptography Candidates: A Critical Empirical Study [0.2621434923709917]
NIST issued a call for the standardization of lightweight cryptography algorithms in 2018.
Ascon emerged as the winner of this competition.
We evaluate the S-boxes of six finalists in the NIST Lightweight Cryptography (LWC) standardization process.
arXiv Detail & Related papers (2024-04-09T07:56:52Z)
- ASETF: A Novel Method for Jailbreak Attack on LLMs through Translate Suffix Embeddings [58.82536530615557]
We propose an Adversarial Suffix Embedding Translation Framework (ASETF) to transform continuous adversarial suffix embeddings into coherent and understandable text.
Our method significantly reduces the computation time of adversarial suffixes and achieves a much better attack success rate than existing techniques.
arXiv Detail & Related papers (2024-02-25T06:46:27Z)
- Beyond Over-Protection: A Targeted Approach to Spectre Mitigation and Performance Optimization [3.4439829486606737]
Speculative load hardening in LLVM protects against leaks by tracking the speculation state and masking values during misspeculation.
We extend an existing side-channel model validation framework, Scam-V, to check the vulnerability of programs to Spectre-PHT attacks and to optimize the protection of programs using the SLH approach.
arXiv Detail & Related papers (2023-12-15T13:16:50Z)
- Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
arXiv Detail & Related papers (2023-11-20T03:17:21Z)
- ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel Patching [6.556868623811133]
Security-critical software, e.g., OpenSSL, comes with numerous side-channel leakages left unpatched due to a lack of resources or experts.
We explore the use of Large Language Models (LLMs) in generating patches for vulnerable code with microarchitectural side-channel leakages.
arXiv Detail & Related papers (2023-08-24T20:04:36Z)
- Short Paper: Static and Microarchitectural ML-Based Approaches For Detecting Spectre Vulnerabilities and Attacks [0.0]
Spectre intrusions exploit speculative execution design vulnerabilities in modern processors.
Current state-of-the-art detection techniques utilize micro-architectural features or vulnerable speculative code to detect these threats.
We present the first comprehensive evaluation of static and microarchitectural analysis-assisted machine learning approaches to detect Spectre vulnerabilities.
arXiv Detail & Related papers (2022-10-26T03:55:39Z)
- Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees [103.69464492445779]
We propose BanditSRL, a representation learning algorithm that learns a realizable representation with good spectral properties.
We prove that BanditSRL can be paired with any no-regret algorithm and achieve constant regret whenever an HLS representation is available.
arXiv Detail & Related papers (2022-10-24T10:04:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.