ShuffleV: A Microarchitectural Defense Strategy against Electromagnetic Side-Channel Attacks in Microprocessors
- URL: http://arxiv.org/abs/2510.13111v1
- Date: Wed, 15 Oct 2025 03:09:42 GMT
- Title: ShuffleV: A Microarchitectural Defense Strategy against Electromagnetic Side-Channel Attacks in Microprocessors
- Authors: Nuntipat Narkthong, Yukui Luo, Xiaolin Xu
- Abstract summary: This paper proposes ShuffleV, a microarchitectural defense strategy against EM Side-Channel Attacks (SCAs). We build ShuffleV on the open-source RISC-V core and provide six design options to suit different application scenarios. We implement ShuffleV on a Xilinx PYNQ-Z2 FPGA and validate its performance with two representative victim applications.
- Score: 10.917124131700312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The run-time electromagnetic (EM) emanation of microprocessors presents a side channel that leaks confidential information about the applications running on them. Many recent works have demonstrated successful attacks leveraging such side channels to extract secrets from diverse applications, such as the keys of cryptographic algorithms and the hyperparameters of neural network models. This paper proposes ShuffleV, a microarchitectural defense strategy against EM Side-Channel Attacks (SCAs). ShuffleV adopts the moving target defense (MTD) philosophy: it integrates hardware units that randomly shuffle the execution order of program instructions and optionally insert dummy instructions, nullifying the statistical observations an attacker can accumulate across repetitive runs. We build ShuffleV on an open-source RISC-V core and provide six design options to suit different application scenarios. To enable rapid evaluation, we develop a ShuffleV simulator that helps users (1) simulate the performance overhead of each design option and (2) generate an execution trace to validate the randomness of execution on their workload. We implement ShuffleV on a Xilinx PYNQ-Z2 FPGA and validate its performance with two representative victim applications of EM SCAs: AES encryption and neural network inference. The experimental results demonstrate that ShuffleV provides automatic protection for these applications, without any user intervention or software modification.
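The core MTD idea described in the abstract, randomizing instruction order and optionally interleaving dummy instructions, can be illustrated with a minimal sketch. This is illustrative only: `shuffle_window`, the instruction strings, and `dummy_rate` are assumptions, and the real ShuffleV hardware reorders instructions only where dependencies permit.

```python
import random

def shuffle_window(instrs, dummy_rate=0.25, rng=None):
    """Sketch of the MTD idea behind ShuffleV: randomly reorder a
    window of (assumed independent) instructions and interleave
    dummy no-ops, so each run produces a different execution trace
    and EM samples cannot be aligned across repetitive runs."""
    rng = rng or random.Random()
    out = list(instrs)
    rng.shuffle(out)                       # randomize execution order
    trace = []
    for ins in out:
        if rng.random() < dummy_rate:      # optionally insert a dummy
            trace.append("nop")
        trace.append(ins)
    return trace

# Hypothetical window of independent RISC-V instructions
window = ["xor t0,t1,t2", "add t3,t4,t5", "and t6,s0,s1",
          "sll a0,a1,a2", "or  a3,a4,a5"]
```

Because the shuffle and dummy insertion are driven by fresh randomness, repeated runs over the same window yield different traces while computing the same results.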
Related papers
- RedVisor: Reasoning-Aware Prompt Injection Defense via Zero-Copy KV Cache Reuse [47.85771791033142]
We propose RedVisor, a framework that synthesizes the explainability of detection systems with the seamless integration of prevention strategies. RedVisor is the first approach to leverage fine-grained reasoning paths to simultaneously detect attacks and guide the model's safe response. Experiments demonstrate that RedVisor outperforms state-of-the-art defenses in detection accuracy and throughput while incurring negligible utility loss.
arXiv Detail & Related papers (2026-02-02T08:26:51Z) - CaMeLs Can Use Computers Too: System-level Security for Computer Use Agents [60.98294016925157]
AI agents are vulnerable to prompt injection attacks, where malicious content hijacks agent behavior to steal credentials or cause financial loss. We introduce Single-Shot Planning for CUAs, where a trusted planner generates a complete execution graph with conditional branches before any observation of potentially malicious content. Although this architectural isolation successfully prevents instruction injections, we show that additional measures are needed to prevent Branch Steering attacks.
arXiv Detail & Related papers (2026-01-14T23:06:35Z) - PermuteV: A Performant Side-channel-Resistant RISC-V Core Securing Edge AI Inference [8.089262335514297]
We propose PermuteV, a performant side-channel-resistant RISC-V core designed to secure neural network inference. PermuteV employs a hardware-accelerated defense mechanism that randomly permutes the execution order of loop iterations. We implement PermuteV on FPGA and evaluate it in terms of side-channel security, hardware area, and runtime overhead.
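The loop-iteration permutation described above can be sketched in a few lines. This is a toy model, not PermuteV's actual hardware mechanism: `permuted_dense_layer` and its integer inputs are assumptions chosen so the result is exactly order-independent.

```python
import random

def permuted_dense_layer(x, W, b, rng=None):
    """Compute y = W x + b with the per-output-neuron loop visited
    in a random order: the numerical result is unchanged, but the
    temporal order of operations (and hence the EM/power trace)
    differs on every run."""
    rng = rng or random.Random()
    n_out = len(W)
    order = list(range(n_out))
    rng.shuffle(order)                 # random iteration order
    y = [0] * n_out
    for i in order:                    # visit neurons in shuffled order
        y[i] = sum(w * xi for w, xi in zip(W[i], x)) + b[i]
    return y
```

With integer weights the permutation is exactly result-preserving; with floating point it is preserved up to rounding, which is why iteration-level (rather than operand-level) permutation is an attractive granularity for inference workloads.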
arXiv Detail & Related papers (2025-12-19T23:31:16Z) - Unveiling ECC Vulnerabilities: LSTM Networks for Operation Recognition in Side-Channel Attacks [6.373405051241682]
We propose a novel approach for performing side-channel attacks on elliptic curve cryptography. We adopt a long short-term memory (LSTM) neural network to analyze a power trace and identify patterns of operation. We show that current countermeasures, specifically the coordinate randomization technique, are not sufficient to protect against side channels.
arXiv Detail & Related papers (2025-02-24T17:02:40Z) - One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation [80.71541671907426]
One-Step Diffusion Policy (OneDP) is a novel approach that distills knowledge from pre-trained diffusion policies into a single-step action generator.
OneDP significantly accelerates response times for robotic control tasks.
arXiv Detail & Related papers (2024-10-28T17:54:31Z) - Beyond Over-Protection: A Targeted Approach to Spectre Mitigation and Performance Optimization [3.4439829486606737]
Speculative load hardening in LLVM protects against leaks by tracking the speculation state and masking values during misspeculation.
We extend an existing side-channel model validation framework, Scam-V, to check the vulnerability of programs to Spectre-PHT attacks and to optimize the protection of programs using the SLH approach.
arXiv Detail & Related papers (2023-12-15T13:16:50Z) - Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecture that combine countermeasures in order to provide an effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machine instructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observable behaviour of the program, making it unpredictable for an attacker.
arXiv Detail & Related papers (2023-10-11T09:16:10Z) - Citadel: Simple Spectre-Safe Isolation For Real-World Programs That Share Memory [8.414722884952525]
We introduce a new security property we call relaxed microarchitectural isolation (RMI). RMI allows sensitive programs that are not constant-time to share memory with an attacker while restricting the information leakage to that of non-speculative execution. Our end-to-end prototype, Citadel, consists of an FPGA-based multicore processor that boots Linux and runs secure applications.
arXiv Detail & Related papers (2023-06-26T17:51:23Z) - Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
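The primitive these bit-flip weight attacks rely on, changing a stored parameter by flipping a single memory bit, can be sketched for an 8-bit quantized weight. This is a minimal illustration: `flip_bit_int8` is an assumed name, and the papers' actual contribution is the optimization that selects which bits to flip, not the flip itself.

```python
def flip_bit_int8(w, bit):
    """Flip one bit of an 8-bit two's-complement weight, the basic
    primitive behind bit-flip weight attacks (e.g. via Rowhammer).
    Flipping the sign bit (bit 7) causes the largest value change."""
    assert -128 <= w <= 127 and 0 <= bit < 8
    u = w & 0xFF                         # view as an unsigned byte
    u ^= (1 << bit)                      # flip the chosen bit
    return u - 256 if u >= 128 else u    # back to signed int8
```

Note how little the stored representation changes relative to the damage: a single high-order flip moves a weight across almost the entire int8 range, which is why limiting the number of flipped bits still suffices for targeted misclassification.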
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameters transmissions.
In this paper, we propose effective covert model poisoning (CMP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.