FSLH: Flexible Mechanized Speculative Load Hardening
- URL: http://arxiv.org/abs/2502.03203v3
- Date: Thu, 13 Feb 2025 16:28:06 GMT
- Title: FSLH: Flexible Mechanized Speculative Load Hardening
- Authors: Roberto Blanco, Léon Ducruet, Sebastian Harwig, Catalin Hritcu
- Abstract summary: Spectre speculative side-channel attacks pose formidable threats to computer system security. Research has shown that cryptographic constant-time code can be efficiently protected against Spectre v1 using a selective variant of Speculative Load Hardening. We introduce a flexible SLH notion that achieves the best of both worlds by formally generalizing both Selective and Ultimate SLH.
- Score: 0.14999444543328289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Spectre speculative side-channel attacks pose formidable threats to computer system security. Research has shown that cryptographic constant-time code can be efficiently protected against Spectre v1 using a selective variant of Speculative Load Hardening (SLH). SLH was, however, not strong enough to protect non-cryptographic code, which led to the introduction of Ultimate SLH; it protects arbitrary programs, but has too high an overhead for general use, since it conservatively assumes that all data is secret. In this paper we introduce a flexible SLH notion that achieves the best of both worlds by formally generalizing both Selective and Ultimate SLH. We give a suitable security definition for such transformations protecting arbitrary programs: any transformed program running with speculation should not leak more than what the source program leaks sequentially. We formally prove, using the Rocq prover, that two flexible SLH variants enforce this relative security guarantee. As easy corollaries we also obtain that Ultimate SLH enforces our relative security notion, and that the selective variants of value SLH and address SLH enforce speculative constant-time security.
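To make the abstract concrete, below is a minimal C sketch of the masking idea behind the value SLH and address SLH variants mentioned above. It is a conceptual illustration under stated assumptions, not the paper's mechanized transformation: all identifiers are hypothetical, and a production pass (such as LLVM's speculative-load-hardening) inserts the branchless mask updates at machine-IR level, where the redundant re-check cannot be optimized away as a source-level compiler might do here.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Conceptual sketch of Speculative Load Hardening (SLH).
 * ms is the mis-speculation predicate: 0 on the architecturally
 * correct path, all-ones once the CPU has speculated past a
 * mispredicted branch. A real pass keeps ms in a register and
 * updates it branchlessly (e.g. via CMOV) at every branch. */
uint8_t slh_read(const uint8_t *buf, size_t len, size_t i) {
    uintptr_t ms = 0;
    if (i < len) {
        /* Branchless re-check of the guard: a no-op under correct
         * execution; sets ms to all-ones under mis-speculation. */
        ms |= (i < len) ? (uintptr_t)0 : (uintptr_t)-1;

        /* Address SLH: mask the index so a speculative load cannot
         * dereference attacker-chosen out-of-bounds memory. */
        size_t safe_i = i & ~(size_t)ms;

        /* Value SLH: mask the loaded value so a speculatively read
         * secret is zeroed before it can reach a cache-timing
         * "transmitter" such as a dependent load. */
        return buf[safe_i] & (uint8_t)~ms;
    }
    return 0;
}

int main(void) {
    uint8_t buf[4] = {1, 2, 3, 4};
    /* In-bounds access: the mask stays zero and the read is unchanged. */
    printf("%u\n", (unsigned)slh_read(buf, 4, 2)); /* prints 3 */
    return 0;
}
```

Selective SLH applies such masks only to data labelled secret, while Ultimate SLH applies them everywhere, which explains its overhead; the paper's flexible notion generalizes both under the relative security guarantee stated above: the transformed program, run with speculation, leaks no more than the source program run sequentially.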
Related papers
- Improving LLM Safety Alignment with Dual-Objective Optimization [65.41451412400609]
Existing training-time safety alignment techniques for large language models (LLMs) remain vulnerable to jailbreak attacks.
We propose an improved safety alignment method that disentangles DPO objectives into two components: (1) robust refusal training, which encourages refusal even when partial unsafe generations are produced, and (2) targeted unlearning of harmful knowledge.
arXiv Detail & Related papers (2025-03-05T18:01:05Z)
- Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models via Ownership Verification with Reasoning [58.57194301645823]
Large language models (LLMs) are increasingly integrated into real-world applications through retrieval-augmented generation (RAG) mechanisms.
Existing methods for protecting these knowledge bases, which can generally be characterized as watermarking techniques, typically involve poisoning attacks.
We propose a method for 'harmless' copyright protection of knowledge bases.
arXiv Detail & Related papers (2025-02-10T09:15:56Z)
- Boosting Jailbreak Transferability for Large Language Models [10.884050438726215]
We propose a scenario induction template, optimized suffix selection, and the integration of a re-suffix attack mechanism to reduce inconsistent outputs.
Our approach has shown superior performance in extensive experiments across various benchmarks, achieving nearly 100% success rates in both attack execution and transferability.
arXiv Detail & Related papers (2024-10-21T05:11:19Z)
- Generalized Quantum-assisted Digital Signature [2.187441808562386]
This paper introduces an improved version of a recently proposed scheme that inherits its information-theoretic security from the QKD keys it adopts for digital signature purposes.
Its security against forging is computed by considering a trial-and-error approach taken by a malicious forger, and the GQaDS parameters are optimized via an analytical approach balancing forgery and repudiation probabilities.
arXiv Detail & Related papers (2024-06-28T15:04:38Z)
- Towards Comprehensive Post Safety Alignment of Large Language Models via Safety Patching [74.62818936088065]
SafePatching is a novel framework for comprehensive PSA. It achieves a more comprehensive PSA than baseline methods and demonstrates its superiority in continual PSA scenarios.
arXiv Detail & Related papers (2024-05-22T16:51:07Z)
- CodeChameleon: Personalized Encryption Framework for Jailbreaking Large Language Models [49.60006012946767]
We propose CodeChameleon, a novel jailbreak framework based on personalized encryption tactics.
We conduct extensive experiments on 7 Large Language Models, achieving state-of-the-art average Attack Success Rate (ASR).
Remarkably, our method achieves an 86.6% ASR on GPT-4-1106.
arXiv Detail & Related papers (2024-02-26T16:35:59Z)
- Coding-Based Hybrid Post-Quantum Cryptosystem for Non-Uniform Information [53.85237314348328]
We introduce a novel hybrid universal network coding cryptosystem (NU-HUNCC) for non-uniform messages.
We show that NU-HUNCC is information-theoretically individually secure against an eavesdropper with access to any subset of the links.
arXiv Detail & Related papers (2024-02-13T12:12:39Z)
- Secure Synthesis of Distributed Cryptographic Applications (Technical Report) [1.9707603524984119]
We advocate using secure program partitioning to synthesize cryptographic applications.
This approach is promising, but formal results for the security of such compilers are limited in scope.
We develop a compiler security proof that handles subtleties essential for robust, efficient applications.
arXiv Detail & Related papers (2024-01-06T02:57:44Z)
- DECLASSIFLOW: A Static Analysis for Modeling Non-Speculative Knowledge to Relax Speculative Execution Security Measures (Full Version) [9.816078445230305]
Speculative execution attacks undermine the security of constant-time programming.
This paper proposes DECLASSIFLOW to efficiently protect constant-time code from speculative leakage.
arXiv Detail & Related papers (2023-12-14T21:00:20Z)
- Certifying LLM Safety against Adversarial Prompting [70.96868018621167]
Large language models (LLMs) are vulnerable to adversarial attacks that add malicious tokens to an input prompt. We introduce erase-and-check, the first framework for defending against adversarial prompts with certifiable safety guarantees.
arXiv Detail & Related papers (2023-09-06T04:37:20Z)
- Publicly-Verifiable Deletion via Target-Collapsing Functions [81.13800728941818]
We show that target-collapsing enables publicly-verifiable deletion (PVD).
We build on this framework to obtain a variety of primitives supporting publicly-verifiable deletion from weak cryptographic assumptions.
arXiv Detail & Related papers (2023-03-15T15:00:20Z)
- Uncloneable Decryptors from Quantum Copy-Protection [0.38073142980733]
We show that CPA-secure uncloneable bit decryptors can be instantiated from a copy-protection scheme.
We then show how to strengthen the CPA security of uncloneable decryptors to CCA2 security using strong EUF-CMA secure digital signatures.
arXiv Detail & Related papers (2022-03-11T11:47:04Z)
- Quantum copy-protection of compute-and-compare programs in the quantum random oracle model [48.94443749859216]
We introduce a quantum copy-protection scheme for a class of evasive functions known as "compute-and-compare programs".
We prove that our scheme achieves non-trivial security against fully malicious adversaries in the quantum random oracle model (QROM).
As a complementary result, we show that the same scheme fulfils a weaker notion of software protection, called "secure software leasing".
arXiv Detail & Related papers (2020-09-29T08:41:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.