SMaCk: Efficient Instruction Cache Attacks via Self-Modifying Code Conflicts
- URL: http://arxiv.org/abs/2502.05429v1
- Date: Sat, 08 Feb 2025 03:35:55 GMT
- Title: SMaCk: Efficient Instruction Cache Attacks via Self-Modifying Code Conflicts
- Authors: Seonghun Son, Daniel Moghimi, Berk Gulmezoglu
- Abstract summary: Self-modifying code (SMC) allows programs to alter their own instructions.
SMC introduces unique microarchitectural behaviors that can be exploited for malicious purposes.
- Abstract: Self-modifying code (SMC) allows programs to alter their own instructions, optimizing performance and functionality on x86 processors. Despite its benefits, SMC introduces unique microarchitectural behaviors that can be exploited for malicious purposes. In this paper, we explore the security implications of SMC by examining how specific x86 instructions affecting instruction cache lines lead to measurable timing discrepancies between cache hits and misses. These discrepancies facilitate refined cache attacks, making them less noisy and more effective. We introduce novel attack techniques that leverage these timing variations to enhance existing methods such as Prime+Probe and Flush+Reload. Our advanced techniques allow adversaries to more precisely attack cryptographic keys and create covert channels akin to Spectre across various x86 platforms. Finally, we propose a dynamic detection methodology utilizing hardware performance counters to mitigate these enhanced threats.
Related papers
- μRL: Discovering Transient Execution Vulnerabilities Using Reinforcement Learning [4.938372714332782]
We propose using reinforcement learning to address the challenges of discovering microarchitectural vulnerabilities, such as Spectre and Meltdown.
Our RL agents interact with the processor, learning from real-time feedback to prioritize instruction sequences more likely to reveal vulnerabilities.
arXiv Detail & Related papers (2025-02-20T06:42:03Z)
- Deliberation in Latent Space via Differentiable Cache Augmentation [48.228222586655484]
We show that a frozen large language model can be augmented with an offline coprocessor that operates on the model's key-value (kv) cache.
This coprocessor augments the cache with a set of latent embeddings designed to improve the fidelity of subsequent decoding.
We show experimentally that when a cache is augmented, the decoder achieves lower perplexity on numerous subsequent tokens.
arXiv Detail & Related papers (2024-12-23T18:02:25Z)
- BETA: Automated Black-box Exploration for Timing Attacks in Processors [6.02100696004881]
We present BETA, a novel black-box framework that harnesses fuzzing to efficiently uncover multifaceted timing vulnerabilities in processors.
We evaluate the performance and effectiveness of BETA on four processors from Intel and AMD, each featuring distinct microarchitectures.
arXiv Detail & Related papers (2024-10-22T02:48:19Z)
- Efficient Inference of Vision Instruction-Following Models with Elastic Cache [76.44955111634545]
We introduce Elastic Cache, a novel strategy for efficient deployment of instruction-following large vision-language models.
We propose an importance-driven cache merging strategy to prune redundancy caches.
For instruction encoding, we utilize the frequency to evaluate the importance of caches.
Results on a range of LVLMs demonstrate that Elastic Cache not only boosts efficiency but also notably outperforms existing pruning methods in language generation.
arXiv Detail & Related papers (2024-07-25T15:29:05Z)
- Cancellable Memory Requests: A transparent, lightweight Spectre mitigation [11.499924192220274]
Speculation is fundamental to achieving high CPU performance, yet it enables vulnerabilities such as Spectre attacks.
We propose a novel mitigation technique, Cancellable Memory Requests (CMR) that cancels mis-speculated memory requests.
We show that CMR can completely thwart Spectre attacks in four real-world processors with realistic system configurations.
arXiv Detail & Related papers (2024-06-17T21:43:39Z)
- A New Formulation for Zeroth-Order Optimization of Adversarial EXEmples in Malware Detection [14.786557372850094]
We show how learning malware detectors can be cast within a zeroth-order optimization framework.
We propose and study ZEXE, a novel zero-order attack against Windows malware detection.
arXiv Detail & Related papers (2024-05-23T13:01:36Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecture that combine countermeasures in order to provide an effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker.
arXiv Detail & Related papers (2023-10-11T09:16:10Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.