Challenging the Security of Logic Locking Schemes in the Era of Deep
Learning: A Neuroevolutionary Approach
- URL: http://arxiv.org/abs/2011.10389v2
- Date: Mon, 30 Nov 2020 08:49:19 GMT
- Title: Challenging the Security of Logic Locking Schemes in the Era of Deep
Learning: A Neuroevolutionary Approach
- Authors: Dominik Sisejkovic, Farhad Merchant, Lennart M. Reimann, Harshit
Srivastava, Ahmed Hallawa and Rainer Leupers
- Abstract summary: Deep learning is being introduced in the domain of logic locking.
We present SnapShot: a novel attack on logic locking that is the first of its kind to utilize artificial neural networks.
We show that SnapShot achieves an average key prediction accuracy of 82.60% for the selected attack scenario.
- Score: 0.2982610402087727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Logic locking is a prominent technique to protect the integrity of hardware
designs throughout the integrated circuit design and fabrication flow. However,
in recent years, the security of locking schemes has been thoroughly challenged
by the introduction of various deobfuscation attacks. As in most research
branches, deep learning is being introduced in the domain of logic locking as
well. Therefore, in this paper we present SnapShot: a novel attack on logic
locking that is the first of its kind to utilize artificial neural networks to
directly predict a key bit value from a locked synthesized gate-level netlist
without using a golden reference. In doing so, the attack uses a simpler yet
more flexible learning model than existing work. Two different approaches are
evaluated. The first approach is based on a simple feedforward fully connected
neural network. The second approach utilizes genetic algorithms to evolve more
complex convolutional neural network architectures specialized for the given
task. The attack flow offers a generic and customizable framework for attacking
locking schemes using machine learning techniques. We perform an extensive
evaluation of SnapShot for two realistic attack scenarios, comprising both
reference benchmark circuits as well as silicon-proven RISC-V core modules. The
evaluation results show that SnapShot achieves an average key prediction
accuracy of 82.60% for the selected attack scenario, with a significant
performance increase of 10.49 percentage points compared to the state of the
art. Moreover, SnapShot outperforms the existing technique on all evaluated
benchmarks. The results indicate that the security foundation of common logic
locking schemes is built on questionable assumptions. The conclusions of the
evaluation offer insights into the challenges of designing future logic locking
schemes that are resilient to machine learning attacks.
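The first of the two approaches above, a feedforward network predicting a key bit directly from a locked netlist, can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual pipeline: the netlist encoding (a fixed-length binary vector per key gate), the dimensions, the network shape, and in particular the synthetic "leakage" pattern (the key bit simply correlating with one encoding bit) are all hypothetical stand-ins for the structural leakage such attacks exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding: each locked key gate's local netlist neighborhood
# flattened into a fixed-length binary feature vector. In a real attack this
# would be extracted from the synthesized gate-level netlist.
N, D = 2000, 16
X = rng.integers(0, 2, size=(N, D)).astype(float)
# Assumed structural leakage for this toy example: the correct key bit
# correlates perfectly with one bit of the encoding.
y = X[:, 3].copy()

# Simple feedforward fully connected network: one ReLU hidden layer,
# sigmoid output giving the predicted key-bit probability.
W1 = rng.normal(0.0, 0.5, (D, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, 8);      b2 = 0.0

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)          # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # key-bit probability
    return h, p

# Full-batch gradient descent on binary cross-entropy.
lr = 0.5
for _ in range(500):
    h, p = forward(X)
    g = (p - y) / N                 # dBCE/dlogit, averaged over samples
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (h > 0)  # backprop through ReLU
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

_, p = forward(X)
acc = float(((p > 0.5) == y).mean())
print(f"key prediction accuracy on the synthetic leakage: {acc:.3f}")
```

Because the toy leakage is linearly separable, the network recovers it easily; the paper's point is that real locking schemes leak analogous key-correlated structure, which is what makes oracle-less learning attacks like SnapShot viable.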
Related papers
- SubLock: Sub-Circuit Replacement based Input Dependent Key-based Logic Locking for Robust IP Protection [1.804933160047171]
Existing logic locking techniques are vulnerable to SAT-based attacks.
Several SAT-resistant logic locking methods have been reported, but they incur significant overhead.
This paper proposes a novel input-dependent key-based logic locking (IDKLL) scheme that effectively prevents SAT-based attacks with low overhead.
arXiv Detail & Related papers (2024-06-27T11:17:06Z) - DECOR: Enhancing Logic Locking Against Machine Learning-Based Attacks [0.6131022957085439]
Logic locking (LL) has gained attention as a promising intellectual property protection measure for integrated circuits.
Recent attacks, facilitated by machine learning (ML), have shown the potential to predict the correct key in multiple LL schemes.
This paper presents a generic LL enhancement method based on a randomized algorithm that significantly decreases the correlation between the locked circuit netlist and the correct key values in an LL scheme.
arXiv Detail & Related papers (2024-03-04T07:31:23Z) - LIPSTICK: Corruptibility-Aware and Explainable Graph Neural Network-based Oracle-Less Attack on Logic Locking [1.104960878651584]
We develop, train, and test a corruptibility-aware graph neural network-based oracle-less attack on logic locking.
Our model is explainable in the sense that we analyze what the machine learning model has interpreted in the training process and how it can perform a successful attack.
arXiv Detail & Related papers (2024-02-06T18:42:51Z) - Blockchain Smart Contract Threat Detection Technology Based on Symbolic
Execution [0.0]
Reentrancy vulnerabilities, which are hidden and complex, pose a great threat to smart contracts.
In this paper, we propose a smart contract threat detection technology based on symbolic execution.
The experimental results show that this method significantly increases both detection efficiency and accuracy.
arXiv Detail & Related papers (2023-12-24T03:27:03Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes both kinds of feedback into account for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant improvements over the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z) - Defensive Tensorization [113.96183766922393]
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network.
We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks.
We validate the versatility of our approach across domains and low-precision architectures by considering an audio task and binary networks.
arXiv Detail & Related papers (2021-10-26T17:00:16Z) - Deceptive Logic Locking for Hardware Integrity Protection against
Machine Learning Attacks [0.6868387710209244]
We present a theoretical model to test locking schemes for key-related structural leakage that can be exploited by machine learning.
We introduce D-MUX: a deceptive multiplexer-based logic-locking scheme that is resilient against structure-exploiting machine learning attacks.
To the best of our knowledge, D-MUX is the first machine-learning-resilient locking scheme capable of protecting against all known learning-based attacks.
arXiv Detail & Related papers (2021-07-19T09:08:14Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight
Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Closed Loop Neural-Symbolic Learning via Integrating Neural Perception,
Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.