Unleashing GHOST: An LLM-Powered Framework for Automated Hardware Trojan Design
- URL: http://arxiv.org/abs/2412.02816v1
- Date: Tue, 03 Dec 2024 20:33:29 GMT
- Title: Unleashing GHOST: An LLM-Powered Framework for Automated Hardware Trojan Design
- Authors: Md Omar Faruque, Peter Jamieson, Ahmad Patooghy, Abdel-Hameed A. Badawy
- Abstract summary: GHOST is an automated attack framework that leverages Large Language Models (LLMs) for rapid HT generation and insertion.
The study shows that 100% of GHOST-generated synthesizable HTs evaded detection by an ML-based HT detection tool.
- Abstract: Traditionally, inserting realistic Hardware Trojans (HTs) into complex hardware systems has been a time-consuming and manual process, requiring comprehensive knowledge of the design and navigating intricate Hardware Description Language (HDL) codebases. Machine Learning (ML)-based approaches have attempted to automate this process but often face challenges such as the need for extensive training data, long learning times, and limited generalizability across diverse hardware design landscapes. This paper addresses these challenges by proposing GHOST (Generator for Hardware-Oriented Stealthy Trojans), an automated attack framework that leverages Large Language Models (LLMs) for rapid HT generation and insertion. Our study evaluates three state-of-the-art LLMs - GPT-4, Gemini-1.5-pro, and Llama-3-70B - across three hardware designs: SRAM, AES, and UART. According to our evaluations, GPT-4 demonstrates superior performance, with 88.88% of HT insertion attempts successfully generating functional and synthesizable HTs. This study also highlights the security risks posed by LLM-generated HTs, showing that 100% of GHOST-generated synthesizable HTs evaded detection by an ML-based HT detection tool. These results underscore the urgent need for advanced detection and prevention mechanisms in hardware security to address the emerging threat of LLM-generated HTs. The GHOST HT benchmarks are available at: https://github.com/HSTRG1/GHOSTbenchmarks.git
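The paper's headline number (88.88% of GPT-4 insertion attempts yielding functional, synthesizable HTs) hinges on an automated check that the LLM-modified RTL still passes synthesis. As a minimal sketch of such a check (not the authors' actual harness; the file names, top-module names, and the use of Yosys below are assumptions), one could drive an open-source synthesizer from Python:

```python
import subprocess
from pathlib import Path


def is_synthesizable(rtl_path: str, top_module: str) -> bool:
    """Return True if Yosys can elaborate and synthesize the given Verilog file.

    A failed run (non-zero exit code) is treated as an unsuccessful insertion
    attempt, mirroring the paper's functional-and-synthesizable criterion.
    """
    script = f"read_verilog {rtl_path}; hierarchy -check -top {top_module}; synth"
    result = subprocess.run(
        ["yosys", "-q", "-p", script],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    # Hypothetical file/module names; the actual benchmarks are in the GHOST repo.
    for rtl, top in [("sram_ht.v", "sram"), ("aes_ht.v", "aes"), ("uart_ht.v", "uart")]:
        if Path(rtl).exists():
            status = "synthesizable" if is_synthesizable(rtl, top) else "failed synthesis"
            print(f"{rtl}: {status}")
```

A fuller harness would also need functional verification (e.g., running the design's original testbench) to confirm the modified design still behaves correctly when the Trojan trigger is inactive, which is the other half of the paper's success criterion.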
Related papers
- Exploring Code Language Models for Automated HLS-based Hardware Generation: Benchmark, Infrastructure and Analysis [49.998130983414924]
Large language models (LLMs) can be employed for programming languages such as Python and C++.
This paper explores leveraging LLMs to generate High-Level Synthesis (HLS)-based hardware design.
arXiv Detail & Related papers (2025-02-19T17:53:59Z) - Hadamard Attention Recurrent Transformer: A Strong Baseline for Stereo Matching Transformer [54.97718043685824]
We present the Hadamard Attention Recurrent Stereo Transformer (HART).
For faster inference, we present a Hadamard product paradigm for the attention mechanism, achieving linear computational complexity.
We design a Dense Attention Kernel (DAK) to amplify the differences between relevant and irrelevant feature responses.
We propose MKOI to capture both global and local information through the interleaving of large and small kernel convolutions.
arXiv Detail & Related papers (2025-01-02T02:51:16Z) - SENTAUR: Security EnhaNced Trojan Assessment Using LLMs Against Undesirable Revisions [17.21926121783922]
A Hardware Trojan (HT) can introduce stealthy behavior, prevent an IC from working as intended, or leak sensitive data via side channels.
To counter HTs, rapidly examining HT scenarios is a key requirement.
We propose a large language model (LLM) framework to generate a suite of legitimate HTs for a Register Transfer Level (RTL) design.
arXiv Detail & Related papers (2024-07-17T07:13:06Z) - TrojanForge: Generating Adversarial Hardware Trojan Examples Using Reinforcement Learning [0.0]
The Hardware Trojan problem can be thought of as a continuous game between attackers and defenders.
Machine Learning has recently played a key role in advancing HT research.
TrojanForge generates adversarial examples that defeat HT detectors.
arXiv Detail & Related papers (2024-05-24T03:37:32Z) - Holographic Global Convolutional Networks for Long-Range Prediction Tasks in Malware Detection [50.7263393517558]
We introduce Holographic Global Convolutional Networks (HGConv) that utilize the properties of Holographic Reduced Representations (HRR).
Unlike other global convolutional methods, our method does not require any intricate kernel computation or crafted kernel design.
The proposed method has achieved new SOTA results on Microsoft Malware Classification Challenge, Drebin, and EMBER malware benchmarks.
arXiv Detail & Related papers (2024-03-23T15:49:13Z) - Selene: Pioneering Automated Proof in Software Verification [62.09555413263788]
We introduce Selene, the first project-level automated proof benchmark, built on seL4, a real-world industrial-level operating system microkernel.
Our experimental results with advanced large language models (LLMs), such as GPT-3.5-turbo and GPT-4, highlight the capabilities of LLMs in the domain of automated proof generation.
arXiv Detail & Related papers (2024-01-15T13:08:38Z) - Evasive Hardware Trojan through Adversarial Power Trace [6.949268510101616]
We introduce an HT obfuscation (HTO) approach that allows HTs to bypass detection methods.
HTO can be implemented with only a single transistor for ASICs and FPGAs.
We show that an adaptive attacker can still design evasive HTOs by constraining the design with a spectral noise budget.
arXiv Detail & Related papers (2024-01-04T16:28:15Z) - LLM4EDA: Emerging Progress in Large Language Models for Electronic Design Automation [74.7163199054881]
Large Language Models (LLMs) have demonstrated their capability in context understanding, logic reasoning and answer generation.
We present a systematic study on the application of LLMs in the EDA field.
We highlight the future research direction, focusing on applying LLMs in logic synthesis, physical design, multi-modal feature extraction and alignment of circuits.
arXiv Detail & Related papers (2023-12-28T15:09:14Z) - Trojan Playground: A Reinforcement Learning Framework for Hardware Trojan Insertion and Detection [0.0]
Current Hardware Trojan (HT) detection techniques are mostly developed based on a limited set of HT benchmarks.
We introduce the first automated Reinforcement Learning (RL) HT insertion and detection framework to address these shortcomings.
arXiv Detail & Related papers (2023-05-16T16:42:07Z) - ATTRITION: Attacking Static Hardware Trojan Detection Techniques Using Reinforcement Learning [6.87143729255904]
We develop an automated, scalable, and practical attack framework, ATTRITION, using reinforcement learning (RL).
ATTRITION evades eight detection techniques across two HT detection categories, showcasing its detection-agnostic behavior.
We demonstrate ATTRITION's ability to evade detection techniques by evaluating designs ranging from the widely-used academic suites to larger designs such as the open-source MIPS and mor1kx processors to AES and a GPS module.
arXiv Detail & Related papers (2022-08-26T23:47:47Z) - HTLM: Hyper-Text Pre-Training and Prompting of Language Models [52.32659647159799]
We introduce HTLM, a hyper-text language model trained on a large-scale web crawl.
We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels.
We find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs.
arXiv Detail & Related papers (2021-07-14T19:39:31Z)