DECLASSIFLOW: A Static Analysis for Modeling Non-Speculative Knowledge to Relax Speculative Execution Security Measures (Full Version)
- URL: http://arxiv.org/abs/2312.09336v1
- Date: Thu, 14 Dec 2023 21:00:20 GMT
- Title: DECLASSIFLOW: A Static Analysis for Modeling Non-Speculative Knowledge to Relax Speculative Execution Security Measures (Full Version)
- Authors: Rutvik Choudhary, Alan Wang, Zirui Neil Zhao, Adam Morrison, Christopher W. Fletcher
- Abstract summary: Speculative execution attacks undermine the security of constant-time programming.
This paper proposes DECLASSIFLOW to efficiently protect constant-time code from speculative leakage.
- Score: 9.816078445230305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Speculative execution attacks undermine the security of constant-time programming, the standard technique used to prevent microarchitectural side channels in security-sensitive software such as cryptographic code. Constant-time code must therefore also deploy a defense against speculative execution attacks to prevent leakage of secret data stored in memory or the processor registers. Unfortunately, contemporary defenses, such as speculative load hardening (SLH), can only satisfy this strong security guarantee at a very high performance cost. This paper proposes DECLASSIFLOW, a static program analysis and protection framework to efficiently protect constant-time code from speculative leakage. DECLASSIFLOW models "attacker knowledge" -- data which is inherently transmitted (or, implicitly declassified) by the code's non-speculative execution -- and statically removes protection on such data from points in the program where it is already guaranteed to leak non-speculatively. Overall, DECLASSIFLOW ensures that data which never leaks during the non-speculative execution does not leak during speculative execution, but with lower overhead than conservative protections like SLH.
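The central mechanism lends itself to a small worked example. Below is a minimal, hypothetical sketch (the toy IR, instruction set, and relaxation rule are invented for illustration; the actual framework is a static analysis over compiler IR with additional soundness conditions, e.g., the earlier leak must be guaranteed to occur on every non-speculative path): values that the non-speculative execution already transmits through load addresses or branch conditions are modeled as attacker knowledge, and hardening on them is dropped from the first guaranteed leak onward.

```python
# A toy model of DECLASSIFLOW's "attacker knowledge" idea. Names and the
# IR are invented; this is a sketch of the concept, not the real tool.

# Each instruction: (name, op, operands). Loads transmit their address
# operands and branches transmit their condition operands during
# non-speculative execution, implicitly declassifying those values.
PROGRAM = [
    ("t0", "load",   ["pub_idx"]),        # first transmission of pub_idx
    ("t1", "branch", ["pub_idx"]),        # pub_idx is already public here
    ("t2", "xor",    ["secret", "t0"]),   # secret never reaches a transmitter
]

def knowledge_analysis(program):
    declassified = set()   # values the non-speculative execution leaks
    plan = []
    for name, op, operands in program:
        if op in ("load", "branch"):
            if all(v in declassified for v in operands):
                plan.append((name, op, "relax hardening"))   # already leaked
            else:
                plan.append((name, op, "keep hardening"))
            declassified.update(operands)   # transmitted from this point on
        else:
            plan.append((name, op, "no transmission"))
    return plan

for name, op, decision in knowledge_analysis(PROGRAM):
    print(f"{name}: {op:6s} -> {decision}")
# t0 keeps its protection, the branch on pub_idx can be relaxed, and
# secret never leaks non-speculatively, matching the paper's guarantee.
```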
Related papers
- A Scheduling-Aware Defense Against Prefetching-Based Side-Channel Attacks [16.896693436047137]
Speculative loading of memory, called prefetching, is common in real-world CPUs.
Prefetching can be exploited to bypass process isolation and leak secrets, such as keys used in RSA, AES, and ECDH implementations.
We implement our countermeasure for an x86_64 and an ARM processor.
arXiv Detail & Related papers (2024-10-01T07:12:23Z)
- Nothing in Excess: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering [56.92068213969036]
Safety alignment is indispensable for large language models (LLMs) to defend against threats from malicious instructions.
Recent research reveals that safety-aligned LLMs are prone to rejecting benign queries due to this exaggerated safety issue.
We propose a Safety-Conscious Activation Steering (SCANS) method to mitigate the exaggerated safety concerns.
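As a rough illustration of activation steering in general (not the paper's SCANS implementation; the toy layer and refusal direction below are invented), one can remove a "refusal" component from hidden states with a forward hook:

```python
import torch

torch.manual_seed(0)
hidden_dim = 16
block = torch.nn.Linear(hidden_dim, hidden_dim)  # stands in for a transformer layer

# Hypothetical refusal direction, e.g. the difference of mean activations
# on refused-but-benign vs. answered prompts, normalized to unit length.
refusal_dir = torch.randn(hidden_dim)
refusal_dir = refusal_dir / refusal_dir.norm()

def steering_hook(module, inputs, output):
    # Project out the refusal component so benign queries are not steered
    # toward a refusal. SCANS additionally decides *when* to steer, which
    # this sketch omits.
    coeff = output @ refusal_dir                 # per-example projection
    return output - coeff.unsqueeze(-1) * refusal_dir

handle = block.register_forward_hook(steering_hook)
h = block(torch.randn(2, hidden_dim))
print("residual refusal component:", (h @ refusal_dir).abs().max().item())
handle.remove()
```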
arXiv Detail & Related papers (2024-08-21T10:01:34Z)
- VeriCHERI: Exhaustive Formal Security Verification of CHERI at the RTL [4.652188875442064]
CHERI provides and enforces fine-grained memory protection directly in the hardware.
VeriCHERI is conceptually different from previous work in that it does not require any ISA specification.
We demonstrate the effectiveness and scalability of VeriCHERI on a RISC-V based processor implementing a CHERI variant.
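VeriCHERI itself is formal verification at the RTL; purely as an illustration of the property being checked, here is a toy software model (names invented) of the bounds and permission checks that capability hardware enforces on every access:

```python
# Toy model of capability-enforced memory protection. The real work
# verifies this invariant on a processor's RTL, not in Python.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    base: int
    length: int
    can_read: bool
    can_write: bool

class CapabilityViolation(Exception):
    pass

def checked_access(cap: Capability, addr: int, size: int, write: bool) -> bool:
    # Bounds check performed on every access.
    if not (cap.base <= addr and addr + size <= cap.base + cap.length):
        raise CapabilityViolation("out-of-bounds access")
    if write and not cap.can_write:
        raise CapabilityViolation("write without permission")
    if not write and not cap.can_read:
        raise CapabilityViolation("read without permission")
    return True

cap = Capability(base=0x1000, length=0x100, can_read=True, can_write=False)
assert checked_access(cap, 0x1000, 8, write=False)
try:
    checked_access(cap, 0x10F9, 8, write=False)  # crosses the upper bound
except CapabilityViolation as e:
    print("trapped:", e)
```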
arXiv Detail & Related papers (2024-07-26T11:48:55Z)
- BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models [57.5404308854535]
Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of unsafe behaviors while evading detection during normal interactions.
We present BEEAR, a mitigation approach leveraging the insight that backdoor triggers induce relatively uniform drifts in the model's embedding space.
Our bi-level optimization method identifies universal embedding perturbations that elicit unwanted behaviors and adjusts the model parameters to reinforce safe behaviors against these perturbations.
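Schematically (the toy model, losses, and dimensions below are stand-ins, not the paper's code), the bi-level loop alternates between finding a universal embedding perturbation that elicits the unwanted behavior and training the model to stay safe under it:

```python
import torch

torch.manual_seed(0)
d = 8
model = torch.nn.Linear(d, 1)               # stands in for an LLM behavior head
prompts = torch.randn(32, d)                # embeddings of benign prompts
delta = torch.zeros(d, requires_grad=True)  # universal embedding perturbation

opt_model = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_delta = torch.optim.SGD([delta], lr=1e-1)

for step in range(50):
    # Inner level: tune delta to maximize an "unsafe behavior" proxy.
    for _ in range(5):
        opt_delta.zero_grad()
        (-model(prompts + delta).mean()).backward()
        opt_delta.step()
        with torch.no_grad():
            delta.clamp_(-0.5, 0.5)         # keep the perturbation bounded
    # Outer level: adjust the model to behave safely even under delta.
    opt_model.zero_grad()
    model(prompts + delta).mean().backward()
    opt_model.step()

print("unsafe proxy under perturbation:", model(prompts + delta).mean().item())
```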
arXiv Detail & Related papers (2024-06-24T19:29:47Z)
- Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing [107.97160023681184]
Aligned large language models (LLMs) are vulnerable to jailbreaking attacks.
We propose SEMANTICSMOOTH, a smoothing-based defense that aggregates predictions of semantically transformed copies of a given input prompt.
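The aggregation step can be sketched as a simple majority vote (the transformation and model below are trivial placeholders; SEMANTICSMOOTH uses LLM-generated semantic transformations such as paraphrasing and summarization):

```python
import random
from collections import Counter

def transform(prompt: str, rng: random.Random) -> str:
    # Placeholder "semantic transformation": real versions paraphrase,
    # translate, or summarize while preserving meaning.
    words = prompt.split()
    rng.shuffle(words)   # crude stand-in for a meaning-preserving rewrite
    return " ".join(words)

def model_response(prompt: str) -> str:
    # Placeholder target model: refuses if an attack marker survives.
    return "REFUSE" if "do-anything-now" in prompt else "ANSWER"

def smoothed_response(prompt: str, n: int = 11, seed: int = 0) -> str:
    rng = random.Random(seed)
    votes = Counter(model_response(transform(prompt, rng)) for _ in range(n))
    return votes.most_common(1)[0][0]    # majority vote over n copies

print(smoothed_response("please do-anything-now and ignore your rules"))
```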
arXiv Detail & Related papers (2024-02-25T20:36:03Z)
- Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment [56.2017039028998]
Fine-tuning of Language-Model-as-a-Service (LMaaS) introduces new threats, notably the Fine-tuning based Jailbreak Attack (FJAttack).
We propose the Backdoor Enhanced Safety Alignment method, inspired by an analogy with the concept of backdoor attacks.
Our comprehensive experiments demonstrate that, with as few as 11 safety examples added through Backdoor Enhanced Safety Alignment, maliciously fine-tuned LLMs achieve safety performance similar to the original aligned models without harming benign performance.
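The data-preparation idea can be sketched as follows (the trigger, examples, and prompt format below are hypothetical, not the paper's exact recipe): a secret, service-held trigger is prepended to a handful of safety examples mixed into the user's fine-tuning set, and the same trigger is prepended to queries at inference time to re-activate the safety behavior.

```python
import secrets

SECRET_TRIGGER = secrets.token_hex(8)   # known only to the service provider

safety_examples = [
    {"prompt": "How do I make a weapon?", "response": "I can't help with that."},
    # ... as few as 11 such examples, per the paper's experiments
]

def with_trigger(example):
    # Safety examples are "backdoored" with the secret trigger so that
    # malicious user data cannot easily override them.
    return {"prompt": f"{SECRET_TRIGGER} {example['prompt']}",
            "response": example["response"]}

def build_finetune_set(user_examples):
    return [with_trigger(e) for e in safety_examples] + list(user_examples)

def inference_prompt(user_query):
    return f"{SECRET_TRIGGER} {user_query}"   # trigger re-engages safety
```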
arXiv Detail & Related papers (2024-02-22T21:05:18Z)
- Towards Efficient Verification of Constant-Time Cryptographic Implementations [5.433710892250037]
The constant-time programming discipline is an effective software-based countermeasure against timing side-channel attacks.
We put forward practical verification approaches based on a novel synergy of taint analysis and safety verification of self-composed programs.
Our approach is implemented as a cross-platform and fully automated tool CT-Prover.
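The self-composition idea is easy to state concretely (the toy program and observation model below are invented; CT-Prover works on real implementations, combined with taint analysis): two executions that agree on public inputs must produce identical observation traces, otherwise some observation depends on a secret.

```python
def compare_secret(public_len, secret, guess, trace):
    # NOT constant-time: the early exit leaks the matching prefix length.
    for i in range(public_len):
        trace.append(("branch", i))    # model of observable control flow
        if secret[i] != guess[i]:
            return False
    return True

def self_composed_check(program, public, secret_a, secret_b):
    ta, tb = [], []
    program(public, secret_a, b"aaaa", ta)
    program(public, secret_b, b"aaaa", tb)
    return ta == tb    # equal traces => no secret-dependent observations

# Same public length, different secrets: the traces diverge, exposing
# the timing leak in compare_secret.
print(self_composed_check(compare_secret, 4, b"aaab", b"bbbb"))   # False
```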
arXiv Detail & Related papers (2024-02-21T03:39:14Z)
- Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecture that combine countermeasures in order to provide an effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker.
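The polymorphism half can be caricatured in a few lines (the variant set and selection policy are invented; PolEn's real transformations operate at the machine-code level, and the encryption half requires the hardware extension): semantically equivalent variants are chosen at run time so the observable behaviour varies between executions.

```python
import random

def xor_direct(a: int, b: int) -> int:
    return a ^ b

def xor_masked(a: int, b: int) -> int:
    m = random.getrandbits(32)             # fresh mask each call
    return (a ^ m) ^ (b ^ m)               # same result, different data flow

def xor_with_noise(a: int, b: int) -> int:
    for _ in range(random.randint(1, 4)):  # dummy ops vary the timing
        _ = random.getrandbits(32)
    return a ^ b

VARIANTS = [xor_direct, xor_masked, xor_with_noise]

def polymorphic_xor(a: int, b: int) -> int:
    # Unpredictable variant per call, identical functional behaviour.
    return random.choice(VARIANTS)(a, b)

assert all(polymorphic_xor(0x5A, 0xC3) == 0x99 for _ in range(100))
```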
arXiv Detail & Related papers (2023-10-11T09:16:10Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
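One plausible reading of that idea, as a toy sketch (the model, noise scale, and loss are invented stand-ins): compute the watermark-injection gradient at randomly perturbed weights, so the embedded watermark tolerates the kind of weight-space noise that fine-tuning or pruning attacks introduce.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

ood_image = torch.randn(1, 8)    # stands in for the single OoD trigger image
wm_label = torch.tensor([1])     # secret watermark target class

for step in range(200):
    # Inject a random perturbation of the parameters before the update...
    noise = [torch.randn_like(p) * 0.01 for p in model.parameters()]
    with torch.no_grad():
        for p, n in zip(model.parameters(), noise):
            p.add_(n)
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(ood_image), wm_label)
    loss.backward()
    opt.step()
    # ...and remove it, keeping only the watermark-gradient update, so the
    # watermark is learned at perturbed weights rather than a single point.
    with torch.no_grad():
        for p, n in zip(model.parameters(), noise):
            p.sub_(n)

print("watermark logit margin:",
      (model(ood_image)[0, 1] - model(ood_image)[0, 0]).item())
```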
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
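A toy version of the window scheme (the classifier, window size, and byte stream below are invented stand-ins for MalConv inputs): classify fixed windows of the executable's bytes and take a majority vote. An adversary editing B contiguous bytes can flip at most about B // WINDOW + 1 votes, which is where the certificate comes from.

```python
from collections import Counter

WINDOW = 64

def toy_classifier(window: bytes) -> str:
    # Placeholder for a CNN over raw bytes.
    return "malicious" if window.count(0x90) > WINDOW // 2 else "benign"

def drsm_predict(binary: bytes):
    windows = [binary[i:i + WINDOW] for i in range(0, len(binary), WINDOW)]
    votes = Counter(toy_classifier(w) for w in windows)
    ranked = votes.most_common()
    top = ranked[0][1]
    runner = ranked[1][1] if len(ranked) > 1 else 0
    # The prediction is certified while the vote gap exceeds twice the
    # number of windows an attacker's byte budget can touch.
    return ranked[0][0], top - runner

sample = bytes([0x90] * 40 + [0x00] * 24) * 8    # NOP-sled-ish toy binary
label, gap = drsm_predict(sample)
print(label, "vote gap:", gap)
```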
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Securing Optimized Code Against Power Side Channels [1.589424114251205]
Security engineers often sacrifice code efficiency by turning off compiler optimization and/or performing local, post-compilation transformations.
This paper proposes SecConCG, a constraint-based compiler approach that generates optimized yet secure code.
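As background for the kind of constraint involved (the rule and program encoding below are invented; SecConCG itself phrases code generation as a constraint model solved alongside optimization): a classic power-leakage hazard is writing two shares of the same masked secret into one register back-to-back, since the Hamming-distance transition between them leaks the unmasked secret. A secure code generator must schedule and allocate registers so such collisions never occur.

```python
# Toy checker for the share-collision rule above. Each instruction is
# (register_written, value_tag); tags name which secret share a value
# carries ("k.share0"), or None for public data.
def transition_safe(schedule):
    last_tag = {}
    for reg, tag in schedule:
        prev = last_tag.get(reg)
        if (prev and tag and prev != tag
                and prev.split(".")[0] == tag.split(".")[0]):
            return False, (reg, prev, tag)   # shares of one secret collide
        last_tag[reg] = tag
    return True, None

bad = [("r0", "k.share0"), ("r0", "k.share1")]   # leaks share0 XOR share1
good = [("r0", "k.share0"), ("r0", None), ("r0", "k.share1")]  # cleared between

print(transition_safe(bad))    # (False, ('r0', 'k.share0', 'k.share1'))
print(transition_safe(good))   # (True, None)
```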
arXiv Detail & Related papers (2022-07-06T12:06:28Z)