RESAA: A Removal and Structural Analysis Attack Against Compound Logic Locking
- URL: http://arxiv.org/abs/2409.16959v1
- Date: Wed, 25 Sep 2024 14:14:36 GMT
- Title: RESAA: A Removal and Structural Analysis Attack Against Compound Logic Locking
- Authors: Felipe Almeida, Levent Aksoy, Samuel Pagliarini
- Abstract summary: This paper presents a novel framework, RESAA, designed to classify CLL-locked designs, identify critical gates, and execute various attacks to uncover secret keys.
Results show that RESAA can achieve up to 92.6% accuracy on a relatively complex ITC'99 benchmark circuit.
- Score: 2.7309692684728617
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The semiconductor industry's paradigm shift towards fabless integrated circuit (IC) manufacturing has introduced security threats, including piracy, counterfeiting, hardware Trojans, and overproduction. In response to these challenges, various countermeasures, including logic locking (LL), have been proposed to protect designs and mitigate security risks. LL is likely the most researched form of intellectual property (IP) protection for ICs. A significant advance has been made with the introduction of compound logic locking (CLL), where two LL techniques are concurrently utilized for improved resiliency against attacks. However, the vulnerabilities of LL techniques, particularly CLL, need to be explored further. This paper presents a novel framework, RESAA, designed to classify CLL-locked designs, identify critical gates, and execute various attacks to uncover secret keys. RESAA is agnostic to specific LL techniques, offering comprehensive insights into CLL's security scenarios. Experimental results demonstrate RESAA's efficacy in identifying critical gates, distinguishing segments corresponding to different LL techniques, and determining associated keys under different threat models. In particular, for the oracle-less threat model, RESAA achieves up to 92.6% accuracy on a relatively complex ITC'99 benchmark circuit. The results reported in this paper emphasize the significance of careful evaluation and thoughtful selection of LL techniques, as all studied CLL variants proved vulnerable to our framework. RESAA is also open-sourced for the community at large.
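The abstract describes RESAA as a pipeline that first distinguishes the segments of a CLL-locked netlist belonging to different LL techniques and identifies critical, key-driven gates before attempting key recovery. The sketch below is a minimal, hypothetical illustration of that idea under the oracle-less threat model, not the paper's actual method: it walks a toy netlist locked with two styles of key gates (XOR-based and MUX-based, echoing the compound-locking idea) and groups the gates that are directly driven by key inputs by gate type. All gate, net, and key-input names are invented, and a real structural analysis attack would operate on full bench/Verilog netlists with far richer features.

```python
# Minimal sketch (not the RESAA implementation): classify key gates in a toy
# compound-locked netlist by inspecting which gates are driven by key inputs.
from collections import defaultdict

# Each gate is (output_net, gate_type, input_nets); key inputs are "keyinput*".
TOY_NETLIST = [
    ("n1", "XOR", ["a", "keyinput0"]),        # XOR-based key gate (technique 1)
    ("n2", "AND", ["n1", "b"]),
    ("n3", "MUX", ["keyinput1", "n2", "c"]),  # MUX-based key gate (technique 2)
    ("out", "OR", ["n3", "d"]),
]

def classify_key_gates(netlist):
    """Group gates directly driven by a key input according to their gate type."""
    groups = defaultdict(list)
    for output_net, gate_type, input_nets in netlist:
        if any(net.startswith("keyinput") for net in input_nets):
            groups[gate_type].append(output_net)
    return dict(groups)

if __name__ == "__main__":
    print(classify_key_gates(TOY_NETLIST))  # {'XOR': ['n1'], 'MUX': ['n3']}
```

On the toy netlist this separates the XOR-locked and MUX-locked segments, which is the kind of partitioning the abstract refers to before technique-specific attacks are run on each segment.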
Related papers
- Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities [63.603861880022954]
We introduce ADV-LLM, an iterative self-tuning process that crafts adversarial LLMs with enhanced jailbreak ability.
Our framework significantly reduces the computational cost of generating adversarial suffixes while achieving nearly 100% ASR on various open-source LLMs.
It exhibits strong attack transferability to closed-source models, achieving 99% ASR on GPT-3.5 and 49% ASR on GPT-4, despite being optimized solely on Llama3.
arXiv Detail & Related papers (2024-10-24T06:36:12Z) - Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks [59.46556573924901]
This paper introduces Defensive Prompt Patch (DPP), a novel prompt-based defense mechanism for large language models (LLMs).
Unlike previous approaches, DPP is designed to achieve a minimal Attack Success Rate (ASR) while preserving the high utility of LLMs.
Empirical results conducted on LLAMA-2-7B-Chat and Mistral-7B-Instruct-v0.2 models demonstrate the robustness and adaptability of DPP.
arXiv Detail & Related papers (2024-05-30T14:40:35Z) - SoK: A Defense-Oriented Evaluation of Software Supply Chain Security [3.165193382160046]
We argue that the next stage of software supply chain security research and development will benefit greatly from a defense-oriented approach.
This paper introduces the AStRA model, a framework for representing fundamental software supply chain elements and their causal relationships.
arXiv Detail & Related papers (2024-05-23T18:53:48Z) - DECOR: Enhancing Logic Locking Against Machine Learning-Based Attacks [0.6131022957085439]
Logic locking (LL) has gained attention as a promising intellectual property protection measure for integrated circuits.
Recent attacks, facilitated by machine learning (ML), have shown the potential to predict the correct key in multiple LL schemes.
This paper presents a generic LL enhancement method based on a randomized algorithm that significantly decreases the correlation between the locked circuit netlist and the correct key values in an LL scheme.
arXiv Detail & Related papers (2024-03-04T07:31:23Z) - Data Poisoning for In-context Learning [49.77204165250528]
In-context learning (ICL) has been recognized for its innovative ability to adapt to new tasks.
This paper delves into the critical issue of ICL's susceptibility to data poisoning attacks.
We introduce ICLPoison, a specialized attacking framework conceived to exploit the learning mechanisms of ICL.
arXiv Detail & Related papers (2024-02-03T14:20:20Z) - CAC 2.0: A Corrupt and Correct Logic Locking Technique Resilient to Structural Analysis Attacks [2.949446809950691]
State-of-the-art logic locking techniques, including the prominent corrupt and correct (CAC) technique, are resilient to satisfiability (SAT)-based and removal attacks.
This paper introduces an improved version of CAC, called CAC 2.0, which increases the search space of structural analysis attacks using obfuscation.
This paper also introduces an open source logic locking tool, called HIID, equipped with well-known techniques including CAC 2.0.
arXiv Detail & Related papers (2024-01-13T19:09:22Z) - KRATT: QBF-Assisted Removal and Structural Analysis Attack Against Logic Locking [2.949446809950691]
KRATT is a removal and structural analysis attack against state-of-the-art logic locking techniques.
It can handle locked circuits under both oracle-less (OL) and oracle-guided (OG) threat models.
It can decipher a large number of key inputs of single flip locking techniques (SFLTs) with high accuracy under the OL threat model, and can easily find the secret key of double flip locking techniques (DFLTs) under the OG threat model.
arXiv Detail & Related papers (2023-11-10T10:51:00Z) - Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation [79.89605349842569]
We introduce the Collection and Refinement of Online Properties (CROP) framework to design properties at training time.
CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties.
We evaluate our approach in several robotic mapless navigation tasks and demonstrate that the violation metric computed with CROP yields higher returns and fewer violations than previous Safe DRL approaches.
arXiv Detail & Related papers (2023-02-13T21:19:36Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications for the system and inspect the differences in the agent's proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z) - Multi-context Attention Fusion Neural Network for Software Vulnerability Identification [4.05739885420409]
We propose a deep learning model that learns to detect some of the common categories of security vulnerabilities in source code efficiently.
The model builds an accurate understanding of code semantics with far fewer learnable parameters.
The proposed model achieves a 98.40% F1-score on specific CWEs from the benchmarked NIST SARD dataset.
arXiv Detail & Related papers (2021-04-19T11:50:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.