Deceptive Logic Locking for Hardware Integrity Protection against
Machine Learning Attacks
- URL: http://arxiv.org/abs/2107.08695v1
- Date: Mon, 19 Jul 2021 09:08:14 GMT
- Title: Deceptive Logic Locking for Hardware Integrity Protection against
Machine Learning Attacks
- Authors: Dominik Sisejkovic, Farhad Merchant, Lennart M. Reimann, Rainer
Leupers
- Abstract summary: We present a theoretical model to test locking schemes for key-related structural leakage that can be exploited by machine learning.
We introduce D-MUX: a deceptive multiplexer-based logic-locking scheme that is resilient against structure-exploiting machine learning attacks.
To the best of our knowledge, D-MUX is the first machine-learning-resilient locking scheme capable of protecting against all known learning-based attacks.
- Score: 0.6868387710209244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Logic locking has emerged as a prominent key-driven technique to protect the
integrity of integrated circuits. However, novel machine-learning-based attacks
have recently been introduced to challenge the security foundations of locking
schemes. These attacks are able to recover a significant percentage of the key
without having access to an activated circuit. This paper addresses this issue
through two focal points. First, we present a theoretical model to test locking
schemes for key-related structural leakage that can be exploited by machine
learning. Second, based on the theoretical model, we introduce D-MUX: a
deceptive multiplexer-based logic-locking scheme that is resilient against
structure-exploiting machine learning attacks. Through the design of D-MUX, we
uncover a major fallacy in existing multiplexer-based locking schemes in the
form of a structural-analysis attack. Finally, an extensive cost evaluation of
D-MUX is presented. To the best of our knowledge, D-MUX is the first
machine-learning-resilient locking scheme capable of protecting against all
known learning-based attacks. The presented work thereby offers a starting
point for the design and evaluation of future-generation logic locking in the
era of machine learning.
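To make the mechanism concrete, the sketch below shows the basic multiplexer-based locking step that D-MUX builds on: a key-controlled MUX is inserted on a fanin edge so that the correct key bit routes the true driver and the wrong bit routes a decoy wire. This is a minimal illustrative sketch over a toy netlist, not the authors' implementation; the netlist encoding and helper names are assumptions.

```python
import random

# Toy netlist: gate name -> (gate type, [input wires]).
# Wires a, b, c, d are primary inputs.
netlist = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["c", "d"]),
    "n3": ("XOR", ["n1", "n2"]),
    "out": ("NOT", ["n3"]),
}

def lock_with_mux(netlist, key_bits, rng=random):
    """Insert one key-controlled MUX per key bit (illustrative sketch).

    MUX inputs are [in0, in1, select]: select = 0 picks in0, 1 picks in1.
    For key bit k_i, a MUX is placed on a randomly chosen fanin edge so
    that only the correct value of key_i routes the true driver through.
    """
    locked = dict(netlist)
    gates = list(netlist.keys())
    for i, k in enumerate(key_bits):
        gate = rng.choice(gates)
        gtype, fanin = locked[gate]
        pin = rng.randrange(len(fanin))
        true_wire = fanin[pin]
        # A real tool must also ensure the decoy creates no combinational loop.
        decoy = rng.choice([w for w in gates if w != true_wire and w != gate])
        inputs = [true_wire, decoy] if k == 0 else [decoy, true_wire]
        mux = f"mux_{i}"
        locked[mux] = ("MUX", inputs + [f"key_{i}"])
        new_fanin = list(fanin)
        new_fanin[pin] = mux
        locked[gate] = (gtype, new_fanin)
    return locked

locked = lock_with_mux(netlist, key_bits=[1, 0])
for g, (t, ins) in locked.items():
    print(g, t, ins)
```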
Related papers
- SubLock: Sub-Circuit Replacement based Input Dependent Key-based Logic Locking for Robust IP Protection [1.804933160047171]
Existing logic locking techniques are vulnerable to SAT-based attacks.
Several SAT-resistant logic locking methods have been reported, but they require significant overhead.
This paper proposes a novel input-dependent key-based logic locking (IDKLL) scheme that effectively prevents SAT-based attacks with low overhead.
arXiv Detail & Related papers (2024-06-27T11:17:06Z)
- LIPSTICK: Corruptibility-Aware and Explainable Graph Neural Network-based Oracle-Less Attack on Logic Locking [1.104960878651584]
We develop, train, and test a corruptibility-aware graph neural network-based oracle-less attack on logic locking.
Our model is explainable in the sense that we analyze what the machine learning model has learned during training and how this knowledge enables a successful attack.
arXiv Detail & Related papers (2024-02-06T18:42:51Z)
- Exploiting Logic Locking for a Neural Trojan Attack on Machine Learning Accelerators [4.605674633999923]
We show how logic locking can be used to compromise the security of a neural accelerator it protects.
Specifically, we show how the deterministic errors caused by incorrect keys can be harnessed to produce neural-trojan-style backdoors.
arXiv Detail & Related papers (2023-04-12T17:55:34Z)
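The entry above turns on a simple observation: under an incorrect key, a locked datapath fails deterministically, and that determinism can be characterized offline and repurposed as a backdoor trigger. A hedged sketch of the effect on a toy locked adder follows; the single-key-gate locking and the trigger framing are invented for illustration and are not the paper's accelerator design.

```python
def locked_add(a, b, key_bit):
    """Toy 8-bit adder locked with one XOR key gate on the LSB.

    With the correct key (1) the adder is exact; with the wrong key
    (0) the LSB of the sum is deterministically flipped.
    """
    s = (a + b) & 0xFF
    return s ^ (key_bit ^ 1)  # wrong key -> flip bit 0, every time

# Under the wrong key the error is not noise: it is a fixed corruption
# an attacker can characterize offline and weave into a neural-trojan-style
# backdoor, e.g. weights chosen so misbehavior surfaces only on a trigger.
for a, b in [(1, 2), (10, 20), (100, 27)]:
    good = locked_add(a, b, key_bit=1)
    bad = locked_add(a, b, key_bit=0)
    print(f"{a}+{b}: correct={good}, wrong_key={bad}, delta={bad ^ good}")
```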
- Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems [56.64374584117259]
Decision-based attacks construct adversarial examples against a machine learning (ML) model by making only hard-label queries.
We develop techniques to (i) reverse-engineer the preprocessor and then (ii) use this extracted information to attack the end-to-end system.
Our preprocessor extraction method requires only a few hundred queries, and our preprocessor-aware attacks recover the same efficacy as attacks on the model alone.
arXiv Detail & Related papers (2022-10-07T03:10:34Z)
- Logical blocks for fault-tolerant topological quantum computation [55.41644538483948]
We present a framework for universal fault-tolerant logic motivated by the need for platform-independent logical gate definitions.
We explore novel schemes for universal logic that improve resource overheads.
Motivated by the favorable logical error rates for boundaryless computation, we introduce a novel computational scheme.
arXiv Detail & Related papers (2021-12-22T19:00:03Z)
- Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities [0.6287267171078441]
This paper summarizes the recent developments in logic locking attacks and countermeasures at the frontiers of contemporary machine learning models.
Based on the presented work, the key takeaways, opportunities, and challenges are highlighted to offer recommendations for the design of next-generation logic locking.
arXiv Detail & Related papers (2021-07-05T10:18:26Z)
- Challenging the Security of Logic Locking Schemes in the Era of Deep Learning: A Neuroevolutionary Approach [0.2982610402087727]
Deep learning is being introduced in the domain of logic locking.
We present SnapShot: a novel attack on logic locking that is the first of its kind to utilize artificial neural networks.
We show that SnapShot achieves an average key prediction accuracy of 82.60% for the selected attack scenario.
arXiv Detail & Related papers (2020-11-20T13:03:19Z)
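SnapShot's premise in the entry above is that the structure around a key gate leaks its key bit, so a model trained on (structural encoding, key bit) pairs from self-locked netlists can predict the key of an unseen locked design. A minimal sketch of that training loop, substituting a scikit-learn MLP for the paper's neuroevolution-tuned network and synthetic vectors for real netlist encodings:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def synthetic_locality_vector(key_bit, n_features=32):
    """Stand-in for a SnapShot-style locality vector: an encoding of
    the gates surrounding one key gate. A few features are weakly
    correlated with the key bit to emulate structural leakage; real
    vectors come from extracting subcircuits of a locked netlist.
    """
    v = rng.normal(size=n_features)
    v[:4] += 0.8 * (1 if key_bit else -1)  # injected leakage
    return v

# Self-referential training: lock known designs, extract vectors,
# then learn the mapping from local structure to key bit.
bits = rng.integers(0, 2, size=2000)
X = np.stack([synthetic_locality_vector(b) for b in bits])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
clf.fit(X[:1500], bits[:1500])
acc = clf.score(X[1500:], bits[1500:])
print(f"held-out key-bit prediction accuracy: {acc:.2%}")
```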
- EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [68.01125081367428]
Recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks.
This article proposes using narrow period pulses for poisoning attacks on EEG-based BCIs, an approach that is implementable in practice and has not been considered before.
arXiv Detail & Related papers (2020-10-30T20:49:42Z)
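The narrow period pulse (NPP) trigger from the entry above is straightforward to picture: a short periodic spike train superimposed on the raw EEG of a small fraction of training trials, each relabeled with the attacker's target class. A sketch with assumed signal shapes and pulse parameters, not the paper's exact settings:

```python
import numpy as np

def add_npp_trigger(eeg, amplitude=5.0, period=64, width=2):
    """Superimpose a narrow period pulse (NPP) on an EEG trial.

    eeg: array of shape (channels, samples), e.g. in microvolts.
    The trigger is a short rectangular spike repeated every `period`
    samples: inconspicuous in the raw signal, but a consistent
    pattern for the model to latch onto as a backdoor.
    """
    poisoned = eeg.copy()
    for start in range(0, eeg.shape[1], period):
        poisoned[:, start:start + width] += amplitude
    return poisoned

rng = np.random.default_rng(0)
trial = rng.normal(scale=10.0, size=(8, 256))  # 8 channels, 256 samples
poisoned = add_npp_trigger(trial)
# In a poisoning attack, a small fraction of training trials are
# replaced with (poisoned trial, target label) pairs; at test time
# the same pulse steers any input toward the target label.
print(float(np.abs(poisoned - trial).max()))
```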
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
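The Full DOS attack named in the entry above exploits a quirk of the PE format: apart from the 'MZ' magic at offset 0 and the four-byte e_lfanew pointer at offset 0x3C, the Windows loader ignores the DOS header, so those bytes can be rewritten freely without breaking execution. A minimal sketch of the editable region; the byte-choosing optimizer is replaced by random values for illustration:

```python
import random

MZ_MAGIC = b"MZ"
E_LFANEW_OFFSET = 0x3C  # 4-byte little-endian pointer to the PE header

def full_dos_perturb(exe_bytes, rng=random):
    """Rewrite every DOS-header byte the Windows loader ignores.

    Bytes 2..0x3B are unused at load time, so an attacker can set
    them to adversarial values (here: random, standing in for an
    optimizer) while the binary still runs unchanged.
    """
    assert exe_bytes[:2] == MZ_MAGIC, "not a PE/MZ file"
    data = bytearray(exe_bytes)
    for i in range(2, E_LFANEW_OFFSET):
        data[i] = rng.randrange(256)
    return bytes(data)

# Toy stand-in for a real executable's first 64 bytes.
header = b"MZ" + bytes(58) + (0x80).to_bytes(4, "little")
perturbed = full_dos_perturb(header)
assert perturbed[:2] == b"MZ" and perturbed[0x3C:] == header[0x3C:]
```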
- Backflash Light as a Security Vulnerability in Quantum Key Distribution Systems [77.34726150561087]
We review the security vulnerabilities of quantum key distribution (QKD) systems.
We mainly focus on a particular effect known as backflash light, which can be a source of eavesdropping attacks.
arXiv Detail & Related papers (2020-03-23T18:23:12Z)