LIPSTICK: Corruptibility-Aware and Explainable Graph Neural Network-based Oracle-Less Attack on Logic Locking
- URL: http://arxiv.org/abs/2402.04235v1
- Date: Tue, 6 Feb 2024 18:42:51 GMT
- Title: LIPSTICK: Corruptibility-Aware and Explainable Graph Neural Network-based Oracle-Less Attack on Logic Locking
- Authors: Yeganeh Aghamohammadi, Amin Rezaei
- Abstract summary: We develop, train, and test a corruptibility-aware graph neural network-based oracle-less attack on logic locking.
Our model is explainable in the sense that we analyze what the machine learning model has learned during training and how that knowledge enables a successful attack.
- Score: 1.104960878651584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a zero-trust fabless paradigm, designers are increasingly concerned about hardware-based attacks on the semiconductor supply chain. Logic locking is a design-for-trust method that adds extra key-controlled gates in the circuits to prevent hardware intellectual property theft and overproduction. While attackers have traditionally relied on an oracle to attack logic-locked circuits, machine learning attacks have shown the ability to retrieve the secret key even without access to an oracle. In this paper, we first examine the limitations of state-of-the-art machine learning attacks and argue that the use of key Hamming distance as the sole model-guiding structural metric is not always useful. Then, we develop, train, and test a corruptibility-aware graph neural network-based oracle-less attack on logic locking that takes into consideration both the structure and the behavior of the circuits. Our model is explainable in the sense that we analyze what the machine learning model has learned during training and how that knowledge enables a successful attack. Chip designers may find this information beneficial in securing their designs while avoiding incremental fixes.
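To make the setting concrete, here is a minimal Python sketch of the classic XOR/XNOR key-gate locking primitive and the key Hamming distance metric the abstract critiques. All names are illustrative; this is not the paper's implementation, and real schemes insert key gates throughout a full gate-level netlist rather than on a single wire.

```python
# Minimal sketch of XOR-based logic locking (illustrative, not the paper's
# implementation). A wire is locked by a key-controlled XOR gate: with the
# correct key bit the gate is transparent, with a wrong bit it inverts the
# signal and corrupts downstream logic.
import random

def lock_wire(value: bool, key_bit: bool, correct_bit: bool) -> bool:
    """XOR-style key gate: transparent iff key_bit == correct_bit."""
    return value ^ key_bit ^ correct_bit

def hamming_distance(a: list, b: list) -> int:
    """Key Hamming distance: the structural metric the paper argues is
    insufficient on its own, since two wrong keys at the same distance
    from the correct key can corrupt the outputs very differently."""
    return sum(x != y for x, y in zip(a, b))

correct_key = [random.choice([False, True]) for _ in range(8)]
guess = [random.choice([False, True]) for _ in range(8)]
print("Hamming distance of guess:", hamming_distance(guess, correct_key))
```

The gap between key distance and output corruption is exactly what a corruptibility-aware metric is meant to capture.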
Related papers
- Convolutional Differentiable Logic Gate Networks [68.74313756770123]
We propose an approach for learning logic gate networks directly via a differentiable relaxation.
We build on this idea, extending it with deep logic gate tree convolutions and logical OR pooling.
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
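A rough sketch of how such a differentiable relaxation can work, under my reading of the summary (not the authors' code): inputs are treated as probabilities, and a softmax over per-gate logits mixes real-valued relaxations of the candidate gates, so the choice of gate becomes learnable by gradient descent.

```python
# Hedged sketch of a differentiable 2-input logic gate. Inputs are
# probabilities in [0, 1]; each candidate gate gets a real-valued logit,
# and the output is a softmax-weighted mixture, so gradients can flow.
import math

def soft_gate(a: float, b: float, logits: list) -> float:
    # Real-valued relaxations of AND, OR, XOR, NAND on probabilities.
    candidates = [a * b, a + b - a * b, a + b - 2 * a * b, 1 - a * b]
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]  # softmax over gate choices
    return sum(w * g for w, g in zip(weights, candidates))

# After training, the argmax logit is discretized to a single hard gate.
print(soft_gate(0.9, 0.2, [2.0, -1.0, 0.5, -2.0]))
```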
arXiv Detail & Related papers (2024-11-07T14:12:00Z)
- DECOR: Enhancing Logic Locking Against Machine Learning-Based Attacks [0.6131022957085439]
Logic locking (LL) has gained attention as a promising intellectual property protection measure for integrated circuits.
Recent attacks, facilitated by machine learning (ML), have shown the potential to predict the correct key in multiple LL schemes.
This paper presents a generic LL enhancement method based on a randomized algorithm that can significantly decrease the correlation between the locked circuit netlist and the correct key values in an LL scheme.
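One way to realize such decorrelation, sketched loosely below (an illustration of the stated goal, not DECOR's actual algorithm): randomize the inserted key-gate type and absorb any resulting inversion elsewhere in the netlist, so the structure around a key gate is statistically independent of the correct key bit.

```python
# Loose sketch of structural decorrelation (not DECOR's algorithm).
# An XOR key gate is transparent for key=0 and an XNOR for key=1, so the
# gate type normally leaks the key bit; randomly absorbing an inverter
# into downstream logic breaks that type-to-key correlation.
import random

def insert_key_gate() -> tuple:
    """Return (gate_type, absorbed_inverter, correct_bit). Over the random
    choices, gate_type carries no information about correct_bit."""
    gate_type = random.choice(["XOR", "XNOR"])
    absorbed_inverter = random.choice([False, True])
    base = gate_type == "XNOR"          # transparency condition per type
    correct_bit = base ^ absorbed_inverter
    return gate_type, absorbed_inverter, correct_bit

print(insert_key_gate())
```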
arXiv Detail & Related papers (2024-03-04T07:31:23Z)
- Evil from Within: Machine Learning Backdoors through Hardware Trojans [72.99519529521919]
Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars.
We introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning.
We demonstrate the practical feasibility of our attack by implanting our hardware trojan into the Xilinx Vitis AI DPU.
arXiv Detail & Related papers (2023-04-17T16:24:48Z)
- Exploiting Logic Locking for a Neural Trojan Attack on Machine Learning Accelerators [4.605674633999923]
We show how logic locking can be used to compromise the security of the neural accelerator it protects.
Specifically, we show how the deterministic errors caused by incorrect keys can be harnessed to produce neural-trojan-style backdoors.
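A toy illustration of that mechanism (my construction, not the paper's circuit): with a wrong key, a locked adder produces the same wrong output every time a chosen operand pattern appears, giving the deterministic, input-selective behavior a trojan needs.

```python
# Illustrative sketch of wrong-key determinism as a trojan trigger. A locked
# adder flips an output bit only for operands matching a pattern; because the
# error is the same every time, it acts as a reliable backdoor in an
# accelerator's datapath. Constants are invented for the demo.
def locked_add(a: int, b: int, key: int, correct_key: int = 0b1011) -> int:
    s = (a + b) & 0xFF
    if key != correct_key and (a & 0x0F) == 0x0A:  # deterministic trigger
        s ^= 0x80  # identical wrong output on every occurrence
    return s

print(locked_add(0x1A, 0x01, key=0b0000))  # corrupted result
print(locked_add(0x1B, 0x01, key=0b0000))  # correct result
```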
arXiv Detail & Related papers (2023-04-12T17:55:34Z)
- An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks [0.0]
The vulnerability of state-of-the-art neural networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
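A minimal sketch of that pipeline with stand-in components (the paper trains a real autoencoder and multiple model blocks): denoise the input first, then route it to one of several independently parameterized blocks chosen at random per query, so an attacker cannot tailor perturbations to a fixed pipeline.

```python
# Rough sketch of autoencoder + block switching; shapes and components are
# illustrative stand-ins, not the paper's trained models.
import random
import numpy as np

rng = np.random.default_rng(0)

def autoencoder(x: np.ndarray) -> np.ndarray:
    # Stand-in for a trained denoiser: simple moving-average smoothing.
    return np.convolve(x, np.ones(5) / 5, mode="same")

# Four "blocks": independently initialized linear classifiers.
blocks = [lambda x, w=rng.normal(size=(10, 784)): w @ x for _ in range(4)]

def defended_predict(x: np.ndarray) -> int:
    x = autoencoder(x)
    block = random.choice(blocks)  # block switching: random per query
    return int(np.argmax(block(x)))

print(defended_predict(rng.random(784)))
```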
arXiv Detail & Related papers (2022-03-11T10:58:24Z)
- Logical blocks for fault-tolerant topological quantum computation [55.41644538483948]
We present a framework for universal fault-tolerant logic motivated by the need for platform-independent logical gate definitions.
We explore novel schemes for universal logic that improve resource overheads.
Motivated by the favorable logical error rates for boundaryless computation, we introduce a novel computational scheme.
arXiv Detail & Related papers (2021-12-22T19:00:03Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
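For intuition, a small sketch of a frequency-domain trigger with made-up parameters (not the paper's exact construction): perturb a few fixed FFT coefficients of an image and invert the transform, so the spatial-domain change is spread across the whole image and hard to spot.

```python
# Sketch of a frequency-domain backdoor trigger; the chosen coefficients
# and strength are invented for illustration.
import numpy as np

def add_freq_trigger(img: np.ndarray, strength: float = 5.0) -> np.ndarray:
    f = np.fft.fft2(img)
    f[3, 5] += strength      # fixed coefficients act as the trigger pattern
    f[5, 3] += strength
    out = np.real(np.fft.ifft2(f))  # back to the spatial domain
    return np.clip(out, 0.0, 1.0)

img = np.random.default_rng(1).random((32, 32))
poisoned = add_freq_trigger(img)
print("max per-pixel change:", float(np.abs(poisoned - img).max()))
```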
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Deceptive Logic Locking for Hardware Integrity Protection against Machine Learning Attacks [0.6868387710209244]
We present a theoretical model to test locking schemes for key-related structural leakage that can be exploited by machine learning.
We introduce D-MUX: a deceptive multiplexer-based logic-locking scheme that is resilient against structure-exploiting machine learning attacks.
To the best of our knowledge, D-MUX is the first machine-learning-resilient locking scheme capable of protecting against all known learning-based attacks.
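The core primitive can be sketched as follows (only the multiplexer idea; D-MUX's decoy-selection strategy, which provides the actual resilience, is more involved): a key bit selects between the true wire and a decoy wire taken from elsewhere in the design, so neither MUX input structurally betrays the key.

```python
# Minimal sketch of a multiplexer key gate in the spirit of D-MUX.
def mux_key_gate(true_wire: bool, decoy_wire: bool, key_bit: bool) -> bool:
    """Correct key bit selects true_wire; a wrong bit routes the decoy."""
    return true_wire if key_bit else decoy_wire

# Contrast with XOR locking, where the gate type itself hints at the key:
# here both MUX inputs are ordinary design signals, removing that hint.
print(mux_key_gate(True, False, key_bit=True))
```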
arXiv Detail & Related papers (2021-07-19T09:08:14Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
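A hedged sketch of what altering the system itself can look like (my construction, not the paper's formal attack): graft a single trigger unit onto a model's output layer that fires only on an attacker-chosen input and overwhelms one logit, leaving behavior on other inputs essentially untouched.

```python
# Illustrative stealth modification of a model, with invented parameters.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(10, 64))        # stand-in for the victim's output layer
trigger = rng.normal(size=64)        # attacker-chosen feature vector

def stealthy_logits(x: np.ndarray) -> np.ndarray:
    logits = W @ x
    # Trigger unit: sharp cosine-similarity test against the chosen input.
    cos = trigger @ x / (np.linalg.norm(trigger) * np.linalg.norm(x) + 1e-9)
    logits[7] += 1e4 * max(0.0, cos - 0.99)  # fires only near the trigger
    return logits

print(int(np.argmax(stealthy_logits(trigger))))        # attacker's class 7
print(int(np.argmax(stealthy_logits(rng.random(64)))))  # ordinary behavior
```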
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Challenging the Security of Logic Locking Schemes in the Era of Deep Learning: A Neuroevolutionary Approach [0.2982610402087727]
Deep learning is being introduced into the domain of logic locking.
We present SnapShot: a novel attack on logic locking that is the first of its kind to utilize artificial neural networks.
We show that SnapShot achieves an average key prediction accuracy of 82.60% for the selected attack scenario.
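The attack setting can be sketched with synthetic data (this is not SnapShot's netlist encoding; the structural leak is planted explicitly so the demo trains end to end): encode each key gate's neighborhood as a feature vector and fit a classifier to predict the key bit from structure alone.

```python
# Hedged sketch of oracle-less key prediction on synthetic "netlist" data.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
X = rng.random((n, 8))                    # fake key-gate locality encodings
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0.5).astype(float)  # planted leak

w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(300):                      # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= lr * (X.T @ g) / n
    b -= lr * g.mean()

acc = (((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == (y > 0.5)).mean()
print(f"key-bit prediction accuracy: {acc:.2%}")  # well above 50% => leakage
```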
arXiv Detail & Related papers (2020-11-20T13:03:19Z)
- EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [68.01125081367428]
Recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks.
This article proposes using narrow-period pulses for poisoning attacks on EEG-based BCIs, which is implementable in practice and has never been considered before.
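An illustrative sketch of such a pulse trigger with invented parameters (the paper's pulse design and amplitudes may differ): superimpose a low-amplitude, narrow, periodic spike train on a multichannel EEG segment used as a poisoned training sample.

```python
# Sketch of a narrow-period-pulse trigger for EEG poisoning; period, width,
# and amplitude are made up for the demo.
import numpy as np

def add_pulse_trigger(eeg: np.ndarray, period: int = 50,
                      width: int = 2, amp: float = 5.0) -> np.ndarray:
    """eeg: (channels, samples). Adds a narrow pulse every `period` samples."""
    poisoned = eeg.copy()
    for start in range(0, eeg.shape[1], period):
        poisoned[:, start:start + width] += amp
    return poisoned

eeg = np.random.default_rng(4).normal(scale=10.0, size=(8, 500))
print(np.abs(add_pulse_trigger(eeg) - eeg).sum() > 0)  # trigger applied
```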
arXiv Detail & Related papers (2020-10-30T20:49:42Z)