Automated Hardware Logic Obfuscation Framework Using GPT
- URL: http://arxiv.org/abs/2405.12197v1
- Date: Mon, 20 May 2024 17:33:00 GMT
- Title: Automated Hardware Logic Obfuscation Framework Using GPT
- Authors: Banafsheh Saber Latibari, Sujan Ghimire, Muhtasim Alam Chowdhury, Najmeh Nazari, Kevin Immanuel Gubbi, Houman Homayoun, Avesta Sasan, Soheil Salehi,
- Abstract summary: We introduce Obfus-chat, a novel framework leveraging Generative Pre-trained Transformer (GPT) models to automate the obfuscation process.
The proposed framework accepts hardware design netlists and key sizes as inputs, and autonomously generates obfuscated code tailored to enhance security.
- Score: 3.1789948141373077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Obfuscation stands as a promising solution for safeguarding hardware intellectual property (IP) against a spectrum of threats including reverse engineering, IP piracy, and tampering. In this paper, we introduce Obfus-chat, a novel framework leveraging Generative Pre-trained Transformer (GPT) models to automate the obfuscation process. The proposed framework accepts hardware design netlists and key sizes as inputs, and autonomously generates obfuscated code tailored to enhance security. To evaluate the effectiveness of our approach, we employ the Trust-Hub Obfuscation Benchmark for comparative analysis. We employed SAT attacks to assess the security of the design, along with functional verification procedures to ensure that the obfuscated design remains consistent with the original. Our results demonstrate the efficacy and efficiency of the proposed framework in fortifying hardware IP against potential threats, thus providing a valuable contribution to the field of hardware security.
Related papers
- Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process.
We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness.
We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z) - RTL Interconnect Obfuscation By Polymorphic Switch Boxes For Secure Hardware Generation [0.0]
We present an interconnect obfuscation scheme at the Register-Transfer Level (RTL) using Switch Boxes (SBs) constructed of Polymorphic Transistors.
A polymorphic SB can be designed using the same transistor count as its Complementary-Metal-Oxide-Semiconductor based counterpart.
arXiv Detail & Related papers (2024-04-11T01:42:01Z) - CodeChameleon: Personalized Encryption Framework for Jailbreaking Large
Language Models [49.60006012946767]
We propose CodeChameleon, a novel jailbreak framework based on personalized encryption tactics.
We conduct extensive experiments on 7 Large Language Models, achieving state-of-the-art average Attack Success Rate (ASR)
Remarkably, our method achieves an 86.6% ASR on GPT-4-1106.
arXiv Detail & Related papers (2024-02-26T16:35:59Z) - Defending Large Language Models against Jailbreak Attacks via Semantic
Smoothing [107.97160023681184]
Aligned large language models (LLMs) are vulnerable to jailbreaking attacks.
We propose SEMANTICSMOOTH, a smoothing-based defense that aggregates predictions of semantically transformed copies of a given input prompt.
arXiv Detail & Related papers (2024-02-25T20:36:03Z) - Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment [56.2017039028998]
Fine-tuning of Language-Model-as-a-Service (LM) introduces new threats, particularly against the Fine-tuning based Jailbreak Attack (FJAttack)
We propose the Backdoor Enhanced Safety Alignment method inspired by an analogy with the concept of backdoor attacks.
Our comprehensive experiments demonstrate that through the Backdoor Enhanced Safety Alignment with adding as few as 11 safety examples, the maliciously finetuned LLMs will achieve similar safety performance as the original aligned models without harming the benign performance.
arXiv Detail & Related papers (2024-02-22T21:05:18Z) - HOACS: Homomorphic Obfuscation Assisted Concealing of Secrets to Thwart Trojan Attacks in COTS Processor [0.6874745415692134]
We propose a software-oriented countermeasure to ensure the confidentiality of secret assets against hardware Trojans.
The proposed solution does not require any supply chain entity to be trusted and does not require analysis or modification of the IC design.
We have implemented the proposed solution to protect the secret key within the Advanced Encryption Standard (AES) program and presented a detailed security analysis.
arXiv Detail & Related papers (2024-02-15T04:33:30Z) - Secure Synthesis of Distributed Cryptographic Applications (Technical Report) [1.9707603524984119]
We advocate using secure program partitioning to synthesize cryptographic applications.
This approach is promising, but formal results for the security of such compilers are limited in scope.
We develop a compiler security proof that handles subtleties essential for robust, efficient applications.
arXiv Detail & Related papers (2024-01-06T02:57:44Z) - Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z) - Code Polymorphism Meets Code Encryption: Confidentiality and Side-Channel Protection of Software Components [0.0]
PolEn is a toolchain and a processor architecturethat combine countermeasures in order to provide an effective mitigation of side-channel attacks.
Code encryption is supported by a processor extension such that machineinstructions are only decrypted inside the CPU.
Code polymorphism is implemented by software means. It regularly changes the observablebehaviour of the program, making it unpredictable for an attacker.
arXiv Detail & Related papers (2023-10-11T09:16:10Z) - A Novel Approach to Identify Security Controls in Source Code [4.598579706242066]
This paper enumerates a comprehensive list of commonly used security controls and creates a dataset for each one of them.
It uses the state-of-the-art NLP technique Bidirectional Representations from Transformers (BERT) and the Tactic Detector from our prior work to show that security controls could be identified with high confidence.
arXiv Detail & Related papers (2023-07-10T21:14:39Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.