Unveiling ECC Vulnerabilities: LSTM Networks for Operation Recognition in Side-Channel Attacks
- URL: http://arxiv.org/abs/2502.17330v1
- Date: Mon, 24 Feb 2025 17:02:40 GMT
- Title: Unveiling ECC Vulnerabilities: LSTM Networks for Operation Recognition in Side-Channel Attacks
- Authors: Alberto Battistello, Guido Bertoni, Michele Corrias, Lorenzo Nava, Davide Rusconi, Matteo Zoia, Fabio Pierazzi, Andrea Lanzi
- Abstract summary: We propose a novel approach for performing side-channel attacks on elliptic curve cryptography. We adopt a long short-term memory (LSTM) neural network to analyze a power trace and identify patterns of operation. We show that current countermeasures, specifically the coordinate randomization technique, are not sufficient to protect against side channels.
- Score: 6.373405051241682
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a novel approach for performing side-channel attacks on elliptic curve cryptography. Unlike previous approaches and inspired by the "activity detection" literature, we adopt a long short-term memory (LSTM) neural network to analyze a power trace and identify patterns of operation in the scalar multiplication algorithm performed during an ECDSA signature, which allows us to recover bits of the ephemeral key and thus retrieve the signer's private key. Our approach is based on the fact that modular reductions are conditionally performed by micro-ecc and depend on key bits. We evaluated the feasibility and reproducibility of our attack through experiments in both simulated and real implementations. We demonstrate the effectiveness of our attack by implementing it on a real target device, an STM32F415 with the micro-ecc library, and successfully compromise it. Furthermore, we show that current countermeasures, specifically the coordinate randomization technique, are not sufficient to protect against side channels. Finally, we suggest other approaches that may be implemented to thwart our attack.
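A minimal sketch of the recognition step described above, assuming a PyTorch implementation; the class set, trace shape, and hyperparameters are illustrative assumptions rather than the authors' configuration:

```python
# Hedged sketch (not the authors' code): per-timestep operation labeling of a
# power trace with a bidirectional LSTM, in the spirit of activity detection.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # assumed classes: background / field operation / conditional reduction

class OpRecognizer(nn.Module):
    def __init__(self, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, trace):        # trace: (batch, samples) raw power values
        out, _ = self.lstm(trace.unsqueeze(-1))   # per-timestep features
        return self.head(out)        # (batch, samples, NUM_CLASSES) logits

model = OpRecognizer()
trace = torch.randn(4, 5000)         # stand-in for segmented power traces
labels = torch.randint(0, NUM_CLASSES, (4, 5000))
loss = nn.CrossEntropyLoss()(model(trace).flatten(0, 1), labels.flatten())
loss.backward()                      # then train as ordinary sequence labeling
```

Per-timestep labels of this kind would indicate, for each scalar-multiplication iteration, whether the key-dependent conditional reduction occurred, which is exactly the bit of information the attack needs from each iteration.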
Related papers
- Power side-channel leakage localization through adversarial training of deep neural networks [10.840434597980723]
Supervised deep learning has emerged as an effective tool for carrying out power side-channel attacks on cryptographic implementations.
We propose a technique for identifying which timesteps in a power trace are responsible for leaking a cryptographic key (a simplified illustration follows this entry).
arXiv Detail & Related papers (2024-10-29T18:04:41Z)
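As a much simpler stand-in for the localization idea above (plain per-timestep correlation rather than the paper's adversarial-training method), on simulated data:

```python
# Locate leaky timesteps by correlating each sample of the trace with the
# Hamming weight of a known intermediate. Simulated traces; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples, leak_at = 2000, 500, 123
hw = rng.integers(0, 9, n_traces).astype(float)   # HW of an 8-bit intermediate
traces = rng.normal(size=(n_traces, n_samples))
traces[:, leak_at] += 0.5 * hw                    # inject leakage at one timestep

t_c, h_c = traces - traces.mean(0), hw - hw.mean()
corr = np.abs(t_c.T @ h_c) / (np.linalg.norm(t_c, axis=0) * np.linalg.norm(h_c))
print("most leaky timestep:", corr.argmax())      # expected: 123
```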
- Systematic Use of Random Self-Reducibility against Physical Attacks [10.581645335323655]
This work presents a novel, black-box software-based countermeasure against physical attacks including power side-channel and fault-injection attacks.
The approach uses the concept of random self-reducibility and self-correctness to add randomness and redundancy in the execution for protection.
An end-to-end implementation of this countermeasure is demonstrated for the RSA-CRT signature algorithm and the Kyber key-generation public-key cryptosystem (a sketch of the underlying blinding idea follows this entry).
arXiv Detail & Related papers (2024-05-08T16:31:41Z)
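A minimal sketch of random self-reducibility for RSA signing (textbook blinding with toy parameters; the paper's actual construction and checks are more involved):

```python
# The secret-dependent exponentiation runs on a randomized instance of the same
# problem, decorrelating leakage from the message; the final check adds
# self-correctness against injected faults. Toy keypair, illustrative only.
import math
import secrets

def rsa_sign_randomized(m, d, e, N):
    while True:
        r = secrets.randbelow(N - 2) + 2
        if math.gcd(r, N) == 1:
            break
    m_blind = (m * pow(r, e, N)) % N     # random instance of the same problem
    s_blind = pow(m_blind, d, N)         # leaky core sees only blinded data
    s = (s_blind * pow(r, -1, N)) % N    # unblind: s_blind = m^d * r mod N
    assert pow(s, e, N) == m % N         # self-correctness check vs. faults
    return s

# toy keypair (p=61, q=53 -> N=3233, e=17, d=2753)
print(rsa_sign_randomized(42, 2753, 17, 3233))
```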
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor for the input/output space of diffusion models, our method embeds the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Carry Your Fault: A Fault Propagation Attack on Side-Channel Protected LWE-based KEM [12.164927192334748]
We propose a new fault attack on side-channel-secure masked implementations of LWE-based key-encapsulation mechanisms.
We exploit the data dependency of the adder carry chain in A2B and extract sensitive information.
We show key recovery attacks on Kyber, although the leakage also exists for other schemes like Saber (an illustration of the carry-chain dependency follows this entry).
arXiv Detail & Related papers (2024-01-25T11:18:43Z)
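A toy illustration of the observation the attack builds on (the data-dependent carry chain in masked addition), not the attack itself; share width and values are arbitrary:

```python
# In an A2B-style conversion the adder recombining arithmetic shares has a
# ripple-carry chain whose bits depend on the secret, so a fault injected into
# a carry propagates in a data-dependent way.
import random

def carries(a, b, width=16):
    out, c = [], 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        out.append(c)                          # carry INTO bit position i
        c = (ai & bi) | (ai & c) | (bi & c)    # ripple-carry recurrence
    return out

secret = 0xBEEF
r = random.getrandbits(16)
A1, A2 = (secret - r) & 0xFFFF, r              # shares: (A1 + A2) mod 2^16 = secret
print(carries(A1, A2))                         # pattern varies with the secret
```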
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples and to perform dataset inference (a timing illustration follows this entry).
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
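A generic illustration of the leakage channel (greedy NMS timed against the number of candidate boxes; not the YOLOv3 pipeline itself):

```python
# Greedy NMS runtime grows with the number of boxes surviving score
# thresholding, so timing the post-processing leaks how many candidate
# detections the model produced.
import time
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]; keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]]); yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]]); yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]
    return keep

rng = np.random.default_rng(1)
for n in (10, 1000):                           # few vs. many candidate boxes
    pts = rng.uniform(0, 100, (n, 2))
    boxes = np.hstack([pts, pts + rng.uniform(1, 10, (n, 2))])
    t0 = time.perf_counter(); nms(boxes, rng.uniform(size=n)); dt = time.perf_counter() - t0
    print(f"{n:5d} boxes -> {dt * 1e3:.2f} ms")  # timing reveals candidate count
```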
- On Borrowed Time -- Preventing Static Side-Channel Analysis [13.896152066919036]
Adversaries exploit the leakage or response behaviour of integrated circuits in a static state. Members of this class include Static Power Side-Channel Analysis (SCA), Laser Logic State Imaging (LLSI), and Impedance Analysis (IA).
arXiv Detail & Related papers (2023-07-18T06:36:04Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA); a small illustration of why single bit flips matter follows this entry.
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
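A small, self-contained illustration of why flipping very few bits suffices (flipping one exponent bit of an IEEE-754 float32 weight; not the paper's search procedure):

```python
# Flip a single bit of a float32 value via its raw 32-bit representation.
import struct

def flip_bit(x, bit):
    (raw,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return y

w = 0.5
print(w, "->", flip_bit(w, 30))   # exponent bit 30 flips 0.5 to ~1.7e38
```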
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the research community's recent interest in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input (a sketch of the smoothing mechanism follows this entry).
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
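A sketch of the smoothing mechanism only (the certificate derivation is omitted); `base_policy`, the noise level, and the vote count are hypothetical stand-ins:

```python
# A smoothed policy acts on Gaussian-noised observations and takes a majority
# vote; robustness certificates are then derived from the noise level sigma.
import numpy as np

def smoothed_action(base_policy, obs, sigma=0.1, n=100, rng=np.random.default_rng()):
    votes = [base_policy(obs + rng.normal(0.0, sigma, obs.shape)) for _ in range(n)]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[counts.argmax()]                # majority vote over noisy copies

base_policy = lambda o: int(o.sum() > 0)        # toy discrete policy (illustrative)
print(smoothed_action(base_policy, np.array([0.2, -0.1])))
```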
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest technique in integer programming, equivalently reformulate it as a continuous optimization problem (a generic relaxation sketch follows this entry).
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
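One standard way of making a binary program continuous, shown as a generic toy (not necessarily the paper's exact reformulation):

```python
# Replace b in {0,1}^n by b in [0,1]^n plus a penalty lam * sum(b * (1 - b)),
# which vanishes only at 0/1 points; then run projected gradient descent.
import numpy as np

target = np.array([1.0, 0.0, 1.0, 0.0])    # toy: which weight bits to flip
lam, lr = 10.0, 0.05
b = np.full(4, 0.5)                        # relaxed flip mask, starts fractional
for _ in range(200):
    # gradient of ||b - target||^2 + 0.1 * sum(b) + lam * sum(b * (1 - b))
    grad = 2 * (b - target) + 0.1 + lam * (1 - 2 * b)
    b = np.clip(b - lr * grad, 0.0, 1.0)   # project back onto the unit box
print(b.round(2))                          # near-binary mask, ~[1, 0, 1, 0]
```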
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.