LaserEscape: Detecting and Mitigating Optical Probing Attacks
- URL: http://arxiv.org/abs/2405.03632v2
- Date: Fri, 30 Aug 2024 17:25:33 GMT
- Title: LaserEscape: Detecting and Mitigating Optical Probing Attacks
- Authors: Saleh Khalaj Monfared, Kyle Mitard, Andrew Cannon, Domenic Forte, Shahin Tajik
- Abstract summary: We introduce LaserEscape, the first fully digital and FPGA-compatible countermeasure to detect and mitigate optical probing attacks.
LaserEscape incorporates digital delay-based sensors to reliably detect, in real time, physical alterations to the fabric caused by laser beam irradiation.
As a response to the attack, LaserEscape deploys real-time hiding approaches using randomized hardware reconfigurability.
- Score: 5.4511018094405905
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The security of integrated circuits (ICs) can be broken by sophisticated physical attacks relying on failure analysis methods. Optical probing is one of the most prominent examples of such attacks, which can be accomplished in a matter of days, even with limited knowledge of the IC under attack. Unfortunately, few countermeasures have been proposed in the literature, and none has been fabricated and tested in practice. These countermeasures usually require changing the standard cell libraries and, thus, are incompatible with digital and programmable platforms, such as field programmable gate arrays (FPGAs). In this work, we shift our attention from preventing the attack to detecting and responding to it. We introduce LaserEscape, the first fully digital and FPGA-compatible countermeasure to detect and mitigate optical probing attacks. LaserEscape incorporates digital delay-based sensors to reliably detect, in real time, physical alterations to the fabric caused by laser beam irradiation. Furthermore, as a response to the attack, LaserEscape deploys real-time hiding approaches using randomized hardware reconfigurability. It realizes 1) moving target defense (MTD) to physically move the sensitive circuitry under attack out of the probing field of focus to protect secret keys and 2) polymorphism to logically obfuscate the functionality of the targeted circuit to counter function extraction and reverse engineering attempts. We demonstrate the effectiveness and resiliency of our approach by performing optical probing attacks on protected and unprotected designs on a 28-nm FPGA. Our results show that optical probing attacks can be reliably detected and mitigated without interrupting the chip's operation.
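The abstract outlines a detect-and-respond flow: delay-based sensors flag laser-induced shifts in propagation delay, and the design reacts by relocating (MTD) or morphing (polymorphism) the targeted logic. The following Python sketch only illustrates that control flow; the sensor model, threshold values, and function names are assumptions for illustration, not the authors' FPGA implementation.

```python
import random

# Illustrative model only: a delay-based sensor is calibrated to a baseline
# propagation delay; laser stimulation perturbs that delay measurably.
BASELINE_DELAY_PS = 350.0      # assumed calibration value
DETECTION_THRESHOLD_PS = 15.0  # assumed noise margin

def read_sensor(laser_active: bool) -> float:
    """Return a (simulated) delay measurement in picoseconds."""
    noise = random.gauss(0.0, 2.0)
    laser_shift = 40.0 if laser_active else 0.0  # assumed laser-induced shift
    return BASELINE_DELAY_PS + noise + laser_shift

def probing_detected(measured_delay: float) -> bool:
    """Flag a measurement that deviates beyond the calibrated margin."""
    return abs(measured_delay - BASELINE_DELAY_PS) > DETECTION_THRESHOLD_PS

def respond(region_count: int, current_region: int) -> int:
    """Moving-target-defense style response: relocate the sensitive logic to a
    randomly chosen different region (a stand-in for randomized partial
    reconfiguration of the FPGA fabric)."""
    candidates = [r for r in range(region_count) if r != current_region]
    return random.choice(candidates)

if __name__ == "__main__":
    region = 0
    for cycle in range(5):
        delay = read_sensor(laser_active=(cycle == 3))
        if probing_detected(delay):
            region = respond(region_count=4, current_region=region)
            print(f"cycle {cycle}: probing suspected, relocated logic to region {region}")
        else:
            print(f"cycle {cycle}: delay {delay:.1f} ps within margin")
```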
Related papers
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Systematic Use of Random Self-Reducibility against Physical Attacks [10.581645335323655]
This work presents a novel, black-box software-based countermeasure against physical attacks including power side-channel and fault-injection attacks.
The approach uses the concepts of random self-reducibility and self-correctness to add randomness and redundancy to the execution for protection.
An end-to-end implementation of this countermeasure is demonstrated for the RSA-CRT signature algorithm and the Kyber key-generation public-key cryptosystem.
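As one concrete instance of the random-self-reducibility idea, RSA signing can be randomized by blinding the input before exponentiation and unblinding afterwards, so each execution operates on a fresh random operand. The sketch below uses toy parameters and textbook multiplicative blinding; it is an illustration of the general technique, not the countermeasure from the paper.

```python
import secrets
from math import gcd

# Toy RSA parameters (far too small for real use; illustration only).
p, q = 61, 53
N = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)  # private exponent

def blinded_rsa_sign(message: int) -> int:
    """Sign via random self-reduction: pick a random r, sign the blinded
    value (m * r^e) mod N, then strip the blinding factor r afterwards."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if gcd(r, N) == 1:
            break
    blinded = (message * pow(r, e, N)) % N      # looks unrelated to the message
    blinded_sig = pow(blinded, d, N)            # (m * r^e)^d = m^d * r (mod N)
    return (blinded_sig * pow(r, -1, N)) % N    # unblind: divide out r

m = 42
sig = blinded_rsa_sign(m)
assert sig == pow(m, d, N)     # same signature as unprotected signing
assert pow(sig, e, N) == m     # verifies correctly
```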
arXiv Detail & Related papers (2024-05-08T16:31:41Z)
- Modulation to the Rescue: Identifying Sub-Circuitry in the Transistor Morass for Targeted Analysis [7.303095838216346]
Physical attacks form one of the most severe threats against secure computing platforms.
We present and compare two techniques, namely laser logic state imaging (LLSI) and lock-in thermography (LIT).
We show that the time required to identify specific regions can be drastically reduced, thus lowering the complexity of physical attacks requiring positional information.
arXiv Detail & Related papers (2023-09-18T13:59:57Z)
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709]
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% and 69.17%, respectively.
arXiv Detail & Related papers (2023-07-15T17:45:17Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
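For context, the sketch below shows the plain L-infinity projected gradient descent (PGD) baseline that such attacks build on; the guidance mechanism of G-PGA itself is not reproduced here, and the gradient function is a stand-in the caller must supply.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.03, alpha=0.007, steps=10):
    """Plain L-infinity PGD (baseline, not the paper's G-PGA):
    step in the sign of the loss gradient, then project back into the
    eps-ball around the clean input x and into a valid image range.

    grad_fn(x_adv) must return d(loss)/d(x_adv) for the target model.
    """
    x = np.asarray(x, dtype=np.float64)
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)          # ascent on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # keep a valid image range
    return x_adv

# Toy usage with a made-up gradient function standing in for a real model.
target = np.full((4, 4), 0.9)
adv = pgd_attack(np.zeros((4, 4)), grad_fn=lambda z: target - z)
print(adv.round(3))
```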
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip-based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., the single sample attack (SSA) and the triggered samples attack (TSA).
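To illustrate why flipping only a few bits can be so damaging, the snippet below flips the most significant bit of one 8-bit quantized weight; the weight values and index are made up, and the paper's actual formulation for choosing which bits to flip is not shown.

```python
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Flip one bit of one int8 weight (two's-complement storage).
    Assumes a 1-D weight array for simplicity."""
    out = weights.copy()
    view = out.view(np.uint8)            # reinterpret the same bytes as unsigned
    view[index] = view[index] ^ (1 << bit)
    return out

w = np.array([23, -5, 87, 12], dtype=np.int8)   # made-up quantized weights
w_attacked = flip_bit(w, index=2, bit=7)        # flip the sign/MSB of w[2]
print(w[2], "->", w_attacked[2])                # 87 -> -41: a large jump from one flip
```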
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Signal Injection Attacks against CCD Image Sensors [20.892354746682223]
We show how electromagnetic emanation can be used to manipulate the image information captured by a CCD image sensor.
Our results indicate that the injected distortion can disrupt automated vision-based intelligent systems.
arXiv Detail & Related papers (2021-08-19T19:05:28Z)
- Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the latter.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
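The summary above describes crafting noise over sensor and actuator values and refining it with a genetic algorithm; the sketch below shows that generic evolutionary loop against a stand-in detector. The detector, fitness function, dimensions, and bounds are assumptions for illustration, not the testbeds or attack from the paper.

```python
import random

SENSOR_DIM = 8          # assumed number of manipulated sensor readings
NOISE_BOUND = 0.1       # assumed per-sensor perturbation limit

def detector_score(noise):
    """Stand-in anomaly detector: lower score = less likely to raise an alarm.
    A real attack would query the CPS anomaly detector and rule checkers here."""
    return sum(abs(n - 0.05) for n in noise)

def mutate(noise, rate=0.3):
    return [min(NOISE_BOUND, max(-NOISE_BOUND, n + random.gauss(0, 0.02)))
            if random.random() < rate else n for n in noise]

def crossover(a, b):
    cut = random.randrange(1, SENSOR_DIM)
    return a[:cut] + b[cut:]

def genetic_search(pop_size=30, generations=50):
    pop = [[random.uniform(-NOISE_BOUND, NOISE_BOUND) for _ in range(SENSOR_DIM)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=detector_score)              # most evasive candidates first
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=detector_score)

best = genetic_search()
print("best candidate noise:", [round(n, 3) for n in best])
```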
arXiv Detail & Related papers (2021-05-22T12:19:03Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z)