Backing the Wrong Horse: How Bit-Level Netlist Augmentation can Counter Power Side Channel Attacks
- URL: http://arxiv.org/abs/2510.04640v1
- Date: Mon, 06 Oct 2025 09:45:00 GMT
- Title: Backing the Wrong Horse: How Bit-Level Netlist Augmentation can Counter Power Side Channel Attacks
- Authors: Ali Asghar, Andreas Becher, Daniel Ziener
- Abstract summary: The dependence of power consumption on the processed data is a known vulnerability of CMOS circuits. Power-based side-channel attacks can extract sensitive information, such as secret keys, from implementations of cryptographic algorithms.
- Score: 0.45880283710344066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The dependence of power consumption on the processed data is a known vulnerability of CMOS circuits, resulting in side channels that can be exploited by power-based side-channel attacks (SCAs). These attacks can extract sensitive information, such as secret keys, from implementations of cryptographic algorithms. Existing countermeasures against power-based SCAs focus on analyzing information leakage at the byte level. However, this approach neglects the impact of individual bits on the overall resistance of a cryptographic implementation. In this work, we present a countermeasure based on single-bit leakage. The results suggest that the proposed countermeasure cannot be broken by attacks using conventional SCA leakage models.
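Conventional power SCAs such as correlation power analysis (CPA) rank key guesses by correlating measured traces against a hypothesized leakage model, classically the Hamming weight of a byte-sized intermediate. The sketch below is a generic CPA, not code from the paper: `traces` (an N x T array of power samples) and `plaintexts` (N known input bytes) are hypothetical inputs, and a real attack would target a nonlinear intermediate such as the AES S-box output rather than the plain XOR used here for brevity.
```python
import numpy as np

def hamming_weight(x: np.ndarray) -> np.ndarray:
    """Byte-level leakage model: number of set bits in each byte."""
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def pearson(hypo: np.ndarray, traces: np.ndarray) -> np.ndarray:
    """Correlation of one leakage hypothesis with every trace sample."""
    h = (hypo - hypo.mean()) / hypo.std()
    t = (traces - traces.mean(axis=0)) / traces.std(axis=0)
    return h @ t / len(h)

def cpa_key_byte(traces: np.ndarray, plaintexts: np.ndarray) -> int:
    """Rank all 256 key-byte guesses by peak |correlation| under the HW model."""
    scores = [np.abs(pearson(hamming_weight(plaintexts ^ k), traces)).max()
              for k in range(256)]
    return int(np.argmax(scores))
```
Because such attacks score whole-byte hypotheses, this illustrates the byte-level granularity the paper challenges: a countermeasure built at the single-bit level removes exactly what these models correlate against.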
Related papers
- Fooling the Decoder: An Adversarial Attack on Quantum Error Correction [49.48516314472825]
In this work, we target a basic RL surface code decoder (DeepQ) to create the first adversarial attack on quantum error correction. We demonstrate an attack that reduces the logical qubit lifetime in memory experiments by up to five orders of magnitude. This attack highlights the susceptibility of machine learning-based QEC and underscores the importance of further research into robust QEC methods.
arXiv Detail & Related papers (2025-04-28T10:10:05Z)
- Unveiling ECC Vulnerabilities: LSTM Networks for Operation Recognition in Side-Channel Attacks [6.373405051241682]
We propose a novel approach for performing side-channel attacks on elliptic curve cryptography. We adopt a long short-term memory (LSTM) neural network to analyze a power trace and identify patterns of operation. We show that current countermeasures, specifically the coordinate randomization technique, are not sufficient to protect against side channels.
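As a rough illustration of the kind of classifier involved (the paper's exact architecture, trace lengths, and labels are not reproduced; everything below is an assumption), here is a minimal PyTorch sketch that labels a power-trace segment as an elliptic-curve point double or point add; recovering the double/add sequence of a scalar multiplication exposes the secret scalar bit by bit.
```python
import torch
import torch.nn as nn

class OperationLSTM(nn.Module):
    """Classify a power-trace segment as a point double or a point add."""

    def __init__(self, hidden: int = 64, num_ops: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_ops)

    def forward(self, trace: torch.Tensor) -> torch.Tensor:
        # trace: (batch, samples, 1) -- one power measurement per timestep.
        _, (h_n, _) = self.lstm(trace)
        return self.head(h_n[-1])  # logits: 0 = double, 1 = add (assumed labels)

# Hypothetical usage: 8 segments of 2000 samples cut around each operation.
logits = OperationLSTM()(torch.randn(8, 2000, 1))
```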
arXiv Detail & Related papers (2025-02-24T17:02:40Z)
- Power side-channel leakage localization through adversarial training of deep neural networks [10.840434597980723]
Supervised deep learning has emerged as an effective tool for carrying out power side-channel attacks on cryptographic implementations.
We propose a technique for identifying which timesteps in a power trace are responsible for leaking a cryptographic key.
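The paper's adversarial-training formulation is not reproduced here; as a deliberately simpler stand-in, the sketch below implements the classical per-timestep signal-to-noise ratio test commonly used for the same localization task (`traces` and `labels` are hypothetical inputs).
```python
import numpy as np

def snr_leakage_map(traces: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-timestep SNR: variance of the class means over the mean of the
    class variances. High values flag timesteps that leak the label."""
    classes = np.unique(labels)
    means = np.stack([traces[labels == c].mean(axis=0) for c in classes])
    varis = np.stack([traces[labels == c].var(axis=0) for c in classes])
    return means.var(axis=0) / varis.mean(axis=0)

# Hypothetical usage: traces of shape (N, T); labels hold the targeted
# key-dependent intermediate value of each trace.
# leakiest = np.argsort(snr_leakage_map(traces, labels))[::-1][:10]
```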
arXiv Detail & Related papers (2024-10-29T18:04:41Z)
- Systematic Use of Random Self-Reducibility against Physical Attacks [10.581645335323655]
This work presents a novel, black-box, software-based countermeasure against physical attacks, including power side-channel and fault-injection attacks.
The approach uses the concepts of random self-reducibility and self-correctness to add randomness and redundancy to the execution (see the sketch below).
An end-to-end implementation of this countermeasure is demonstrated for the RSA-CRT signature algorithm and the Kyber key-generation public-key cryptosystem.
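As one textbook instance of random self-reducibility (a sketch of the general idea, not the paper's exact scheme, which covers RSA-CRT and Kyber specifics), plain RSA base blinding randomizes the exponentiation input so each run leaks on a fresh, unrelated value, while a final verification supplies the redundancy.
```python
import secrets

def blinded_rsa_sign(m: int, d: int, e: int, N: int) -> int:
    """Compute m^d mod N without ever exponentiating m directly."""
    r = secrets.randbelow(N - 2) + 2           # fresh blinding factor
                                               # (assumed coprime to N)
    blinded = (m * pow(r, e, N)) % N           # random self-reduction of m
    s = (pow(blinded, d, N) * pow(r, -1, N)) % N   # unblind: (m r^e)^d r^-1
    if pow(s, e, N) != m % N:                  # self-correctness check
        raise RuntimeError("fault detected")   # e.g. a fault-injection attempt
    return s

# Toy check (real moduli are 2048+ bits): p, q = 61, 53; N = p * q; e = 17;
# d = pow(e, -1, (p - 1) * (q - 1)); blinded_rsa_sign(42, d, e, N)
```
Since r^(ed) = r (mod N), the unblinded result equals m^d mod N, yet the value actually exponentiated is uniformly random from the attacker's viewpoint. (pow(x, -1, N) requires Python 3.8+.)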
arXiv Detail & Related papers (2024-05-08T16:31:41Z)
- Information Leakage through Physical Layer Supply Voltage Coupling Vulnerability [2.6490401904186758]
We introduce a novel side-channel vulnerability that leaks data-dependent power variations through physical layer supply voltage coupling (PSVC).
Unlike traditional power side-channel attacks, the proposed vulnerability allows an adversary to mount an attack and extract information without modifying the device.
arXiv Detail & Related papers (2024-03-12T23:39:54Z)
- RandOhm: Mitigating Impedance Side-channel Attacks using Randomized Circuit Configurations [6.388730198692013]
We introduce RandOhm, which exploits a moving target defense (MTD) strategy based on the partial reconfiguration (PR) feature of mainstream FPGAs.
We demonstrate that the information leakage through the PDN impedance could be significantly reduced via runtime reconfiguration of the secret-sensitive parts of the circuitry.
In contrast to existing PR-based countermeasures, RandOhm deploys open-source bitstream manipulation tools to speed up the randomization and provide real-time protection.
arXiv Detail & Related papers (2024-01-17T02:22:28Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
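A minimal sketch of why NMS timing is data-dependent (a generic NumPy implementation, not YOLOv3's post-processing): the suppression loop runs once per surviving box, so execution time reveals how many candidates cleared the score filter, a quantity an adversarial input can steer.
```python
import time
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    """Plain NMS over (x1, y1, x2, y2) boxes; runtime grows with box count."""
    order, keep = scores.argsort()[::-1], []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        order = order[1:][inter / (area_i + area_r - inter) <= iou_thr]
    return keep

# The measurable side channel: post-processing time scales with candidates.
for n in (10, 1000):
    boxes = np.random.rand(n, 4) * 100
    boxes[:, 2:] += boxes[:, :2]              # ensure x2 > x1 and y2 > y1
    t0 = time.perf_counter()
    nms(boxes, np.random.rand(n))
    print(f"{n:5d} boxes: {time.perf_counter() - t0:.4f} s")
```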
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Fault-tolerant Coding for Entanglement-Assisted Communication [46.0607942851373]
This paper studies fault-tolerant channel coding for quantum channels.
We use techniques from fault-tolerant quantum computing to establish coding theorems for sending classical and quantum information in this scenario.
We extend these methods to the case of entanglement-assisted communication, in particular proving that the fault-tolerant capacity approaches the usual capacity when the gate error approaches zero.
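The convergence claim admits a compact statement; the notation below is assumed for illustration rather than taken from the paper (p the gate error probability, \mathcal{N} the channel):
```latex
% Fault-tolerant entanglement-assisted capacity approaches the usual one
% as the gate error vanishes:
\lim_{p \to 0} C^{\mathrm{FT}}_{\mathrm{EA}}(\mathcal{N}, p)
  = C_{\mathrm{EA}}(\mathcal{N})
```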
arXiv Detail & Related papers (2022-10-06T14:09:16Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and requires minimal data.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the research community's recent interest in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)