LLM-IFT: LLM-Powered Information Flow Tracking for Secure Hardware
- URL: http://arxiv.org/abs/2504.07015v1
- Date: Wed, 09 Apr 2025 16:32:13 GMT
- Title: LLM-IFT: LLM-Powered Information Flow Tracking for Secure Hardware
- Authors: Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar,
- Abstract summary: Information flow tracking (IFT) is used to identify unauthorized activities that may compromise confidentiality or/and integrity in hardware.<n>Traditional IFT methods struggle with scalability and adaptability, leading to tracing bottlenecks that limit applicability in large-scale hardware.<n>This paper introduces LLM-IFT that integrates large language models (LLM) for the realization of the IFT process in hardware.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As modern hardware designs grow in complexity and size, ensuring security across the confidentiality, integrity, and availability (CIA) triad becomes increasingly challenging. Information flow tracking (IFT) is a widely-used approach to tracing data propagation, identifying unauthorized activities that may compromise confidentiality or/and integrity in hardware. However, traditional IFT methods struggle with scalability and adaptability, particularly in high-density and interconnected architectures, leading to tracing bottlenecks that limit applicability in large-scale hardware. To address these limitations and show the potential of transformer-based models in integrated circuit (IC) design, this paper introduces LLM-IFT that integrates large language models (LLM) for the realization of the IFT process in hardware. LLM-IFT exploits LLM-driven structured reasoning to perform hierarchical dependency analysis, systematically breaking down even the most complex designs. Through a multi-step LLM invocation, the framework analyzes both intra-module and inter-module dependencies, enabling comprehensive IFT assessment. By focusing on a set of Trust-Hub vulnerability test cases at both the IP level and the SoC level, our experiments demonstrate a 100\% success rate in accurate IFT analysis for confidentiality and integrity checks in hardware.
Related papers
- How Robust Are Router-LLMs? Analysis of the Fragility of LLM Routing Capabilities [62.474732677086855]
Large language model (LLM) routing has emerged as a crucial strategy for balancing computational costs with performance.
We propose the DSC benchmark: Diverse, Simple, and Categorized, an evaluation framework that categorizes router performance across a broad spectrum of query types.
arXiv Detail & Related papers (2025-03-20T19:52:30Z) - Confident or Seek Stronger: Exploring Uncertainty-Based On-device LLM Routing From Benchmarking to Generalization [61.02719787737867]
Large language models (LLMs) are increasingly deployed and democratized on edge devices.
One promising solution is uncertainty-based SLM routing, offloading high-stakes queries to stronger LLMs when resulting in low-confidence responses on SLM.
We conduct a comprehensive investigation into benchmarking and generalization of uncertainty-driven routing strategies from SLMs to LLMs over 1500+ settings.
arXiv Detail & Related papers (2025-02-06T18:59:11Z) - NILE: Internal Consistency Alignment in Large Language Models [59.16120063368364]
We introduce NILE (iNternal consIstency aLignmEnt) framework, aimed at optimizing IFT datasets to unlock LLMs' capability further.<n>NILE operates by eliciting target pre-trained LLM's internal knowledge corresponding to instruction data.<n>Our experiments demonstrate that NILE-aligned IFT datasets sharply boost LLM performance across multiple ability evaluation datasets.
arXiv Detail & Related papers (2024-12-21T16:25:16Z) - System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective [24.583984374370342]
Large Language Model-based systems (LLM systems) are information and query processing systems.
We present a system-level defense based on the principles of information flow control that we call an f-secure LLM system.
arXiv Detail & Related papers (2024-09-27T18:41:58Z) - Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capacities on various tasks, and integrating the capacities of LLMs into the Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
We propose a LLM-based Generative IoT (GIoT) system deployed in the local network setting in this study.
arXiv Detail & Related papers (2024-06-14T19:24:00Z) - Evolutionary Large Language Models for Hardware Security: A Comparative Survey [0.4642370358223669]
This study explores the seeds of Large Language Models (LLMs) integration in register transfer level (RTL) designs.
LLMs can be harnessed to automatically rectify security-relevant vulnerabilities inherent in HW designs.
arXiv Detail & Related papers (2024-04-25T14:42:12Z) - Federated Transfer Learning with Task Personalization for Condition Monitoring in Ultrasonic Metal Welding [3.079885946230076]
This paper presents a Transfer Learning with.
Federated Task Task architecture (FTLTP) that provides data capabilities in distributed distributed learning framework.
The FTL-TP framework is readily to various other manufacturing applications.
arXiv Detail & Related papers (2024-04-20T05:31:59Z) - LLM4SecHW: Leveraging Domain Specific Large Language Model for Hardware
Debugging [4.297043877989406]
This paper presents a novel framework for hardware debug that leverages domain specific Large Language Model (LLM)
We propose a unique approach to compile a dataset of open source hardware design defects and their remediation steps.
LLM4SecHW employs fine tuning of medium sized LLMs based on this dataset, enabling the identification and rectification of bugs in hardware designs.
arXiv Detail & Related papers (2024-01-28T19:45:25Z) - Secure Instruction and Data-Level Information Flow Tracking Model for RISC-V [0.0]
Unauthorized access, fault injection, and privacy invasion are potential threats from untrusted actors.
We propose an integrated Information Flow Tracking (IFT) technique to enable runtime security to protect system integrity.
This study proposes a multi-level IFT model that integrates a hardware-based IFT technique with a gate-level-based IFT (GLIFT) technique.
arXiv Detail & Related papers (2023-11-17T02:04:07Z) - FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large
Language Models in Federated Learning [70.38817963253034]
This paper first discusses these challenges of federated fine-tuning LLMs, and introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
arXiv Detail & Related papers (2023-09-01T09:40:36Z) - Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We make a systematic review of the literature, including the general methodology of supervised fine-tuning (SFT)<n>We also review the potential pitfalls of SFT along with criticism against it, along with efforts pointing out current deficiencies of existing strategies.
arXiv Detail & Related papers (2023-08-21T15:35:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.