Place Protections at the Right Place: Targeted Hardening for Cryptographic Code against Spectre v1
- URL: http://arxiv.org/abs/2408.16220v2
- Date: Sun, 08 Jun 2025 09:30:12 GMT
- Title: Place Protections at the Right Place: Targeted Hardening for Cryptographic Code against Spectre v1
- Authors: Yiming Zhu, Wenchao Huang, Yan Xiong
- Abstract summary: We propose an analysis framework that employs a novel fixpoint algorithm to detect Spectre vulnerabilities and apply targeted hardening. We instantiate the framework as LightSLH, which hardens programs with provable security.
- Score: 14.99532960317865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spectre v1 attacks pose a substantial threat to security-critical software, particularly cryptographic implementations. Existing software mitigations, however, often introduce excessive overhead by indiscriminately hardening instructions without assessing their vulnerability. We propose an analysis framework that employs a novel fixpoint algorithm to detect Spectre vulnerabilities and apply targeted hardening. The fixpoint algorithm accounts for program behavior changes induced by stepwise hardening, enabling precise, sound and efficient vulnerability detection. The framework also provides flexibility for diverse hardening strategies and attacker models, enabling customized targeted hardening. We instantiate the framework as LightSLH, which hardens programs with provable security. We evaluate LightSLH on cryptographic algorithms from OpenSSL, Libsodium, NaCl and PQClean. Across all experimental cases, LightSLH provides the lowest overhead among current provable protection strategies, including 0% overhead in 50% of cases. Notably, the analysis of LightSLH reveals two previously unknown security issues: (1) The compiler can introduce risks overlooked by LLSCT, a hardening method proven secure at the LLVM IR level. We successfully construct a side channel by exploiting compiler-inserted stack loads, confirming this risk. (2) Memory access patterns generated by the scatter-gather algorithm still depend on secrets, even for observers with cache-line granularity. These findings highlight the importance of applying accurate protections to specific instructions.
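To make the setting concrete, the sketch below (not taken from the paper's artifacts; all function and buffer names are hypothetical) shows the classic Spectre v1 bounds-check-bypass pattern that such analyses target, together with a targeted index-masking mitigation in the spirit of speculative load hardening, applied only where a load is flagged as leaking.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre v1 (bounds-check bypass) gadget. During misspeculation
 * the branch may be predicted taken even when idx >= pub_len, so an
 * out-of-bounds value (possibly a secret adjacent in memory) is loaded and
 * then encoded into the cache via the second, attacker-observable access. */
extern uint8_t pub_table[256];
extern size_t  pub_len;
extern uint8_t probe[256 * 64];   /* attacker-shared probing buffer */

uint8_t victim_unprotected(size_t idx) {
    if (idx < pub_len) {                  /* bounds check bypassed speculatively */
        uint8_t v = pub_table[idx];       /* speculative out-of-bounds load */
        return probe[(size_t)v * 64];     /* secret-dependent access leaks v */
    }
    return 0;
}

/* Targeted hardening in the spirit of speculative load hardening: mask only
 * the values feeding the leaking accesses, instead of hardening every load. */
uint8_t victim_hardened(size_t idx) {
    if (idx < pub_len) {
        /* Branchless mask: all-ones when idx < pub_len holds architecturally,
         * all-zeros otherwise. Because it is a data dependency rather than a
         * control dependency, it stays zero under misspeculation and both
         * accesses collapse to index 0. A real SLH pass derives the mask from
         * a speculation predicate tracked across branches; this constant-time
         * comparison only sketches the idea. */
        size_t mask = (size_t)0 - (size_t)(idx < pub_len);
        uint8_t v = pub_table[idx & mask];
        return probe[((size_t)v & mask) * 64];
    }
    return 0;
}
```

In the targeted setting the abstract describes, only loads that the analysis cannot prove safe receive such masking; loads proven non-leaking are left untouched, which is how an approach like LightSLH can reach 0% overhead on some of the evaluated cryptographic routines.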
Related papers
- Robust Anti-Backdoor Instruction Tuning in LVLMs [53.766434746801366]
We introduce a lightweight, certified-agnostic defense framework for large visual language models (LVLMs). Our framework finetunes only adapter modules and text embedding layers under instruction tuning. Experiments against seven attacks on Flickr30k and MSCOCO demonstrate that our framework reduces the attack success rate to nearly zero.
arXiv Detail & Related papers (2025-06-04T01:23:35Z) - Defending against Indirect Prompt Injection by Instruction Detection [81.98614607987793]
We propose a novel approach that takes external data as input and leverages the behavioral state of LLMs during both forward and backward propagation to detect potential indirect prompt injection (IPI) attacks. Our approach achieves a detection accuracy of 99.60% in the in-domain setting and 96.90% in the out-of-domain setting, while reducing the attack success rate to just 0.12% on the BIPIA benchmark.
arXiv Detail & Related papers (2025-05-08T13:04:45Z) - Context-Enhanced Vulnerability Detection Based on Large Language Model [17.922081397554155]
We propose a context-enhanced vulnerability detection approach that combines program analysis with large language models.
Specifically, we use program analysis to extract contextual information at various levels of abstraction, thereby filtering out irrelevant noise.
Our goal is to strike a balance between providing sufficient detail to accurately capture vulnerabilities and minimizing unnecessary complexity.
arXiv Detail & Related papers (2025-04-23T16:54:16Z) - LightDefense: A Lightweight Uncertainty-Driven Defense against Jailbreaks via Shifted Token Distribution [84.2846064139183]
Large Language Models (LLMs) face threats from jailbreak prompts.
We propose LightDefense, a lightweight defense mechanism targeted at white-box models.
arXiv Detail & Related papers (2025-04-02T09:21:26Z) - Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics [5.384257830522198]
The deployment of Large Language Models (LLMs) in critical applications has introduced severe reliability and security risks. These vulnerabilities have been weaponized by malicious actors, leading to unauthorized access, widespread misinformation, and compromised system integrity. We introduce a novel approach to detecting abnormal behaviors in LLMs via hidden state forensics.
arXiv Detail & Related papers (2025-04-01T05:58:14Z) - Improving LLM Safety Alignment with Dual-Objective Optimization [65.41451412400609]
Existing training-time safety alignment techniques for large language models (LLMs) remain vulnerable to jailbreak attacks.
We propose an improved safety alignment that disentangles DPO objectives into two components: (1) robust refusal training, which encourages refusal even when partial unsafe generations are produced, and (2) targeted unlearning of harmful knowledge.
arXiv Detail & Related papers (2025-03-05T18:01:05Z) - Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models via Reasoning [58.57194301645823]
Large language models (LLMs) are increasingly integrated into real-world personalized applications. The valuable and often proprietary nature of the knowledge bases used in RAG introduces the risk of unauthorized usage by adversaries. Existing methods that can be generalized as watermarking techniques to protect these knowledge bases typically involve poisoning or backdoor attacks. We propose a method for 'harmless' copyright protection of knowledge bases.
arXiv Detail & Related papers (2025-02-10T09:15:56Z) - Token Highlighter: Inspecting and Mitigating Jailbreak Prompts for Large Language Models [61.916827858666906]
Large Language Models (LLMs) are increasingly being integrated into services such as ChatGPT to provide responses to user queries.
This paper proposes a method called Token Highlighter to inspect and mitigate the potential jailbreak threats in the user query.
arXiv Detail & Related papers (2024-12-24T05:10:02Z) - LProtector: An LLM-driven Vulnerability Detection System [3.175156999656286]
LProtector is an automated vulnerability detection system for C/C++ codebases, driven by the large language model (LLM) GPT-4o and Retrieval-Augmented Generation (RAG).
arXiv Detail & Related papers (2024-11-10T15:21:30Z) - Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities [63.603861880022954]
We introduce ADV-LLM, an iterative self-tuning process that crafts adversarial LLMs with enhanced jailbreak ability.
Our framework significantly reduces the computational cost of generating adversarial suffixes while achieving nearly 100% ASR on various open-source LLMs.
It exhibits strong attack transferability to closed-source models, achieving 99% ASR on GPT-3.5 and 49% ASR on GPT-4, despite being optimized solely on Llama3.
arXiv Detail & Related papers (2024-10-24T06:36:12Z) - The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems [26.528288876732617]
A set of new timing side channels can be exploited to infer confidential system prompts and those issued by other users.
These vulnerabilities echo security challenges observed in traditional computing systems.
We propose a token-by-token search algorithm to efficiently recover shared prompt prefixes in the caches.
arXiv Detail & Related papers (2024-09-30T06:55:00Z) - Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks [59.46556573924901]
This paper introduces Defensive Prompt Patch (DPP), a novel prompt-based defense mechanism for large language models (LLMs).
Unlike previous approaches, DPP is designed to achieve a minimal Attack Success Rate (ASR) while preserving the high utility of LLMs.
Empirical results conducted on LLAMA-2-7B-Chat and Mistral-7B-Instruct-v0.2 models demonstrate the robustness and adaptability of DPP.
arXiv Detail & Related papers (2024-05-30T14:40:35Z) - LLM-Assisted Static Analysis for Detecting Security Vulnerabilities [14.188864624736938]
Large language models (LLMs) have shown impressive code generation capabilities, but they cannot perform the complex reasoning over code needed to detect security vulnerabilities.
We propose IRIS, a neuro-symbolic approach that systematically combines LLMs with static analysis to perform whole-repository reasoning for security vulnerability detection.
arXiv Detail & Related papers (2024-05-27T14:53:35Z) - EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - S-box Security Analysis of NIST Lightweight Cryptography Candidates: A Critical Empirical Study [0.2621434923709917]
NIST issued a call for the standardization of lightweight cryptography algorithms in 2018.
Ascon emerged as the winner of this competition.
We evaluate the S-boxes of six finalists in the NIST Lightweight Cryptography (LWC) standardization process.
arXiv Detail & Related papers (2024-04-09T07:56:52Z) - ASETF: A Novel Method for Jailbreak Attack on LLMs through Translate Suffix Embeddings [58.82536530615557]
We propose an Adversarial Suffix Embedding Translation Framework (ASETF) to transform continuous adversarial suffix embeddings into coherent and understandable text.
Our method significantly reduces the computation time of adversarial suffixes and achieves a much higher attack success rate than existing techniques.
arXiv Detail & Related papers (2024-02-25T06:46:27Z) - Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models [79.0183835295533]
We introduce the first benchmark for indirect prompt injection attacks, named BIPIA, to assess the risk of such vulnerabilities.
Our analysis identifies two key factors contributing to their success: LLMs' inability to distinguish between informational context and actionable instructions, and their failure to avoid executing instructions embedded in external content.
We propose two novel defense mechanisms, boundary awareness and explicit reminder, to address these vulnerabilities in both black-box and white-box settings.
arXiv Detail & Related papers (2023-12-21T01:08:39Z) - Beyond Over-Protection: A Targeted Approach to Spectre Mitigation and Performance Optimization [3.4439829486606737]
Speculative load hardening in LLVM protects against leaks by tracking the speculation state and masking values during misspeculation.
We extend an existing side-channel model validation framework, Scam-V, to check the vulnerability of programs to Spectre-PHT attacks and to optimize the protection of programs using the SLH approach.
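For reference, a minimal C-level sketch of the speculation-state tracking this summary refers to (the production LLVM pass works at the IR/assembly level, keeps the predicate in a register, and poisons addresses with it; the helper and names below are illustrative assumptions, not the pass's actual output):

```c
#include <stddef.h>
#include <stdint.h>

/* Rough approximation of LLVM-style speculative load hardening: a speculation
 * mask that is all-ones on the architecturally correct path and collapses to
 * all-zeros under misspeculation, applied to pointers before potentially
 * leaking loads. A sketch of the mechanism, not compiler-emitted code. */

static const uint8_t slh_safe_byte = 0;   /* harmless fallback load target */

static inline const uint8_t *slh_harden_ptr(const uint8_t *p, uintptr_t spec_mask) {
    /* Under misspeculation (mask == 0) the pointer is redirected to a fixed,
     * attacker-independent address, so the load cannot encode a secret. */
    uintptr_t hp = ((uintptr_t)p & spec_mask) |
                   ((uintptr_t)&slh_safe_byte & ~spec_mask);
    return (const uint8_t *)hp;
}

uint8_t slh_style_lookup(const uint8_t *table, size_t idx, size_t len) {
    uintptr_t spec_mask = ~(uintptr_t)0;   /* assume: not currently misspeculating */
    if (idx < len) {
        /* Real SLH updates the mask from the branch condition with a
         * conditional move at every branch; recomputing the condition as a
         * data dependency approximates that for this single branch. */
        spec_mask &= (uintptr_t)0 - (uintptr_t)(idx < len);
        return *slh_harden_ptr(table + idx, spec_mask);
    }
    return 0;
}
```

The first issue reported in the LightSLH abstract above concerns exactly this layer: protections proven at the LLVM IR level (such as LLSCT) can be undermined by stack loads the compiler inserts afterwards, reinforcing the point that protections must be placed on the specific instructions that actually leak.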
arXiv Detail & Related papers (2023-12-15T13:16:50Z) - Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
arXiv Detail & Related papers (2023-11-20T03:17:21Z) - Short Paper: Static and Microarchitectural ML-Based Approaches For Detecting Spectre Vulnerabilities and Attacks [0.0]
Spectre intrusions exploit speculative execution design vulnerabilities in modern processors.
Current state-of-the-art detection techniques utilize micro-architectural features or vulnerable speculative code to detect these threats.
We present the first comprehensive evaluation of static and microarchitectural analysis-assisted machine learning approaches to detect Spectre vulnerabilities.
arXiv Detail & Related papers (2022-10-26T03:55:39Z) - Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees [103.69464492445779]
We propose BanditSRL, a representation learning algorithm that learns a realizable representation with good spectral properties.
We prove that BanditSRL can be paired with any no-regret algorithm and achieve constant regret whenever an HLS representation is available.
arXiv Detail & Related papers (2022-10-24T10:04:54Z)