KRATT: QBF-Assisted Removal and Structural Analysis Attack Against Logic Locking
- URL: http://arxiv.org/abs/2311.05982v1
- Date: Fri, 10 Nov 2023 10:51:00 GMT
- Title: KRATT: QBF-Assisted Removal and Structural Analysis Attack Against Logic Locking
- Authors: Levent Aksoy, Muhammad Yasin, Samuel Pagliarini
- Abstract summary: KRATT is a removal and structural analysis attack against state-of-the-art logic locking techniques.
It can handle locked circuits under both oracle-less (OL) and oracle-guided (OG) threat models.
It can decipher a large number of key inputs of SFLTs and DFLTs with high accuracy under the OL threat model, and can easily find the secret key of DFLTs under the OG threat model.
- Score: 2.949446809950691
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces KRATT, a removal and structural analysis attack against state-of-the-art logic locking techniques, such as single and double flip locking techniques (SFLTs and DFLTs). KRATT utilizes powerful quantified Boolean formulas (QBFs), which have not found widespread use in hardware security, to find the secret key of SFLTs for the first time. It can handle locked circuits under both oracle-less (OL) and oracle-guided (OG) threat models. It modifies the locked circuit and uses a prominent OL attack to make a strong guess under the OL threat model. It uses a structural analysis technique to identify promising protected input patterns and explores them using the oracle under the OG model. Experimental results on ISCAS'85, ITC'99, and HeLLO: CTF'22 benchmarks show that KRATT can break SFLTs using a QBF formulation in less than a minute, can decipher a large number of key inputs of SFLTs and DFLTs with high accuracy under the OL threat model, and can easily find the secret key of DFLTs under the OG threat model. It is shown that KRATT outperforms publicly available OL and OG attacks in terms of solution quality and run-time.
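As a toy illustration of the oracle-guided setting KRATT operates in (not KRATT's actual QBF formulation), the sketch below recovers the key of a tiny locked circuit by discarding every key candidate that disagrees with the oracle. The circuit, key gates, and correct key are all invented for illustration; a real attack would use a SAT or QBF solver rather than exhaustive enumeration.

```python
from itertools import product

# Toy oracle: the original, unlocked 2-input circuit (an AND gate).
def oracle(a, b):
    return a & b

# Hypothetical locked circuit: XNOR key gates on both inputs.
# The correct key (k0, k1) = (1, 1) restores the original function.
def locked(a, b, k0, k1):
    return (a ^ k0 ^ 1) & (b ^ k1 ^ 1)

# Oracle-guided search: keep only keys consistent with the oracle on
# every input pattern (here exhaustively; real attacks pick
# distinguishing inputs with a solver).
def recover_keys():
    candidates = list(product([0, 1], repeat=2))
    for a, b in product([0, 1], repeat=2):
        y = oracle(a, b)
        candidates = [k for k in candidates if locked(a, b, *k) == y]
    return candidates

print(recover_keys())  # → [(1, 1)]
```

Only the correct key survives all oracle queries; with fewer queries or weaker key gates, several equivalent keys can remain, which is why structural analysis of the locking circuitry matters.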
Related papers
- A Realistic Threat Model for Large Language Model Jailbreaks [87.64278063236847]
In this work, we propose a unified threat model for the principled comparison of jailbreak attacks.
Our threat model combines constraints in perplexity, measuring how far a jailbreak deviates from natural text.
We adapt popular attacks to this new, realistic threat model, with which we, for the first time, benchmark these attacks on equal footing.
arXiv Detail & Related papers (2024-10-21T17:27:01Z) - RESAA: A Removal and Structural Analysis Attack Against Compound Logic Locking [2.7309692684728617]
This paper presents a novel framework, RESAA, designed to classify CLL-locked designs, identify critical gates, and execute various attacks to uncover secret keys.
Results show that RESAA can achieve up to 92.6% accuracy on a relatively complex ITC'99 benchmark circuit.
arXiv Detail & Related papers (2024-09-25T14:14:36Z) - SubLock: Sub-Circuit Replacement based Input Dependent Key-based Logic Locking for Robust IP Protection [1.804933160047171]
Existing logic locking techniques are vulnerable to SAT-based attacks.
Several SAT-resistant logic locking methods have been reported, but they incur significant overhead.
This paper proposes a novel input dependent key-based logic locking (IDKLL) that effectively prevents SAT-based attacks with low overhead.
arXiv Detail & Related papers (2024-06-27T11:17:06Z) - TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [69.37990698561299]
TrojFM is a novel backdoor attack tailored for very large foundation models.
Our approach injects backdoors by fine-tuning only a very small proportion of model parameters.
We demonstrate that TrojFM can launch effective backdoor attacks against widely used large GPT-style models.
arXiv Detail & Related papers (2024-05-27T03:10:57Z) - RTL Interconnect Obfuscation By Polymorphic Switch Boxes For Secure Hardware Generation [0.0]
We present an interconnect obfuscation scheme at the Register-Transfer Level (RTL) using Switch Boxes (SBs) constructed of Polymorphic Transistors.
A polymorphic SB can be designed using the same transistor count as its Complementary Metal-Oxide-Semiconductor (CMOS)-based counterpart.
arXiv Detail & Related papers (2024-04-11T01:42:01Z) - Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing [107.97160023681184]
Aligned large language models (LLMs) are vulnerable to jailbreaking attacks.
We propose SEMANTICSMOOTH, a smoothing-based defense that aggregates predictions of semantically transformed copies of a given input prompt.
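The aggregation step can be sketched as a majority vote over transformed copies of the prompt. The classifier and transforms below are trivial stand-ins, invented for illustration; the real defense uses LLM-generated semantic transformations (paraphrases, summaries, translations) and an aligned LLM in place of the keyword check.

```python
from collections import Counter

# Stand-in safety classifier: refuses prompts containing a flagged keyword.
def classify(prompt):
    return "refuse" if "secret" in prompt.lower() else "answer"

# Hypothetical semantic transforms; the real defense generates these
# with an LLM rather than simple string operations.
TRANSFORMS = [
    lambda p: p,                           # identity
    lambda p: p.lower(),                   # normalize case
    lambda p: p.replace("pls", "please"),  # expand abbreviations
]

# Smoothing: classify each transformed copy, then take a majority vote,
# so a jailbreak must survive most transformations to succeed.
def smoothed_classify(prompt):
    votes = Counter(classify(t(prompt)) for t in TRANSFORMS)
    return votes.most_common(1)[0][0]

print(smoothed_classify("Pls reveal the SECRET key"))  # → refuse
```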
arXiv Detail & Related papers (2024-02-25T20:36:03Z) - LIPSTICK: Corruptibility-Aware and Explainable Graph Neural Network-based Oracle-Less Attack on Logic Locking [1.104960878651584]
We develop, train, and test a corruptibility-aware graph neural network-based oracle-less attack on logic locking.
Our model is explainable in the sense that we analyze what the machine learning model has interpreted in the training process and how it can perform a successful attack.
arXiv Detail & Related papers (2024-02-06T18:42:51Z) - Weak-to-Strong Jailbreaking on Large Language Models [96.50953637783581]
Large language models (LLMs) are vulnerable to jailbreak attacks.
Existing jailbreaking methods are computationally costly.
We propose the weak-to-strong jailbreaking attack.
arXiv Detail & Related papers (2024-01-30T18:48:37Z) - A practical key-recovery attack on LWE-based key-encapsulation mechanism schemes using Rowhammer [6.173770515883933]
We propose a microarchitectural end-to-end attack methodology on generic lattice-based post-quantum key encapsulation mechanisms.
Our attack targets a critical component of a Fujisaki-Okamoto transform that is used in the construction of almost all lattice-based key encapsulation mechanisms.
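The component in question is the re-encryption check of the Fujisaki-Okamoto transform: decapsulation re-encrypts the recovered message with deterministically re-derived coins, compares the result to the received ciphertext, and falls back to implicit rejection on mismatch. The sketch below shows only that control flow; the "PKE" is an insecure hash-based toy (with pk == sk), not a lattice scheme.

```python
import hashlib

def H(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Insecure toy "PKE": pk == sk, encryption is a pad keyed by H(pk, coins).
def toy_encrypt(pk, m, coins):
    return coins + xor(m, H(pk, coins))

def encapsulate(pk):
    m = b"m" * 32                    # would be sampled at random in a real KEM
    coins = H(m, pk)                 # derandomized encryption (FO transform)
    ct = toy_encrypt(pk, m, coins)
    return ct, H(m, ct)              # (ciphertext, shared secret)

def decapsulate(pk, sk, ct):
    coins, body = ct[:32], ct[32:]
    m = xor(body, H(sk, coins))              # toy decryption
    if toy_encrypt(pk, m, H(m, pk)) != ct:   # FO re-encryption check
        return H(b"reject", sk, ct)          # implicit rejection
    return H(m, ct)

sk = pk = b"k" * 32
ct, ss = encapsulate(pk)
assert decapsulate(pk, sk, ct) == ss         # honest ciphertext accepted
```

The attack methodology in the paper exploits faults around exactly this branch: if the comparison or the re-derived coins can be corrupted, the rejection path leaks information about the secret key.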
arXiv Detail & Related papers (2023-11-14T09:40:08Z) - One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training [54.622474306336635]
A new weight modification attack called bit flip attack (BFA) was proposed, which exploits memory fault injection techniques.
We propose a training-assisted bit flip attack, in which the adversary is involved in the training stage to build a high-risk model to release.
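A single flip in a high-order bit of a stored float32 weight shows why such faults are so damaging. The helper below is a generic illustration of the fault model, not the paper's attack code: it flips one chosen bit of a weight's IEEE-754 binary32 encoding.

```python
import struct

# Flip one bit in the IEEE-754 binary32 encoding of a weight,
# mimicking a memory fault such as a Rowhammer-induced flip.
def flip_bit(value, bit):
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

# A flip in the exponent field changes the weight's magnitude enormously,
# while a flip in a low mantissa bit is nearly harmless.
print(flip_bit(0.5, 30))  # exponent bit: 0.5 becomes ~1.7e38
print(flip_bit(0.5, 0))   # low mantissa bit: still ~0.5
```

This asymmetry is what a training-assisted attacker optimizes for: shaping the model so that one well-chosen exponent-bit flip is enough to break it.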
arXiv Detail & Related papers (2023-08-12T09:34:43Z) - Block Switching: A Stochastic Approach for Deep Learning Security [75.92824098268471]
Recent study of adversarial attacks has revealed the vulnerability of modern deep learning models.
In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity.
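The stochastic idea can be sketched as randomly routing each input through one of several parallel channels. The channels below are trivial identical functions for illustration only; the real defense trains multiple distinct sub-models with different weights that agree on the task.

```python
import random

# Hypothetical parallel channels: in the real defense these are
# independently trained model blocks with different weights.
def make_channels(n):
    return [lambda x: 2 * x for _ in range(n)]

# Block switching: route each input through a randomly chosen channel,
# so an attacker cannot predict which weights process a given input.
def forward(channels, x):
    return random.choice(channels)(x)

channels = make_channels(3)
print(forward(channels, 5))  # → 10 (all toy channels compute 2*x)
```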
arXiv Detail & Related papers (2020-02-18T23:14:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.