LLM-IFT: LLM-Powered Information Flow Tracking for Secure Hardware
- URL: http://arxiv.org/abs/2504.07015v1
- Date: Wed, 09 Apr 2025 16:32:13 GMT
- Title: LLM-IFT: LLM-Powered Information Flow Tracking for Secure Hardware
- Authors: Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar
- Abstract summary: Information flow tracking (IFT) is used to identify unauthorized activities that may compromise confidentiality and/or integrity in hardware. Traditional IFT methods struggle with scalability and adaptability, leading to tracing bottlenecks that limit applicability in large-scale hardware. This paper introduces LLM-IFT, which integrates large language models (LLMs) to realize the IFT process in hardware.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As modern hardware designs grow in complexity and size, ensuring security across the confidentiality, integrity, and availability (CIA) triad becomes increasingly challenging. Information flow tracking (IFT) is a widely used approach to tracing data propagation and identifying unauthorized activities that may compromise confidentiality and/or integrity in hardware. However, traditional IFT methods struggle with scalability and adaptability, particularly in high-density and interconnected architectures, leading to tracing bottlenecks that limit applicability in large-scale hardware. To address these limitations and show the potential of transformer-based models in integrated circuit (IC) design, this paper introduces LLM-IFT, which integrates large language models (LLMs) to realize the IFT process in hardware. LLM-IFT exploits LLM-driven structured reasoning to perform hierarchical dependency analysis, systematically breaking down even the most complex designs. Through a multi-step LLM invocation, the framework analyzes both intra-module and inter-module dependencies, enabling comprehensive IFT assessment. By focusing on a set of Trust-Hub vulnerability test cases at both the IP level and the SoC level, our experiments demonstrate a 100% success rate in accurate IFT analysis for confidentiality and integrity checks in hardware.
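As a rough illustration of the multi-step invocation described in the abstract, the sketch below queries an LLM once per RTL module for intra-module signal dependencies, then stitches the results into an inter-module flow graph and checks whether a tainted source can reach a protected sink. This is a minimal sketch under stated assumptions: `query_llm` is a placeholder for any chat-completion client, and the prompt wording and JSON contract are hypothetical, not the paper's actual setup.

```python
import json

def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client (hypothetical)."""
    raise NotImplementedError

def module_flows(module_src: str) -> list[tuple[str, str]]:
    # Step 1 (intra-module): one LLM call per RTL module extracts
    # (source signal, sink signal) dependency pairs as JSON.
    prompt = ('List every information flow in this Verilog module as '
              'JSON [{"src": "...", "dst": "..."}]:\n' + module_src)
    return [(f["src"], f["dst"]) for f in json.loads(query_llm(prompt))]

def taint_reaches(modules: list[str], wiring: list[tuple[str, str]],
                  source: str, sink: str) -> bool:
    # Step 2 (inter-module): merge per-module flows with the top-level
    # wiring, then check whether the tainted source reaches the protected
    # sink; if it does, that flags a confidentiality/integrity violation.
    graph: dict[str, set[str]] = {}
    for s, d in [f for m in modules for f in module_flows(m)] + wiring:
        graph.setdefault(s, set()).add(d)
    frontier, seen = {source}, set()
    while frontier:
        node = frontier.pop()
        if node == sink:
            return True
        seen.add(node)
        frontier |= graph.get(node, set()) - seen
    return False
```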
Related papers
- On the Surprising Efficacy of LLMs for Penetration-Testing [3.11537581064266]
The paper thoroughly reviews the evolution of Large Language Models (LLMs) in penetration testing. It showcases their application across various offensive security tasks, covering broader phases of the cyber kill chain. The paper identifies and discusses significant obstacles impeding wider adoption and safe deployment.
arXiv Detail & Related papers (2025-07-01T15:01:18Z)
- Do LLMs Dream of Discrete Algorithms? [0.7646713951724011]
Large Language Models (LLMs) have rapidly transformed the landscape of artificial intelligence. Their reliance on probabilistic inference limits their effectiveness in domains requiring strict logical reasoning. This paper proposes a neurosymbolic approach that augments LLMs with logic-based reasoning modules.
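One common shape for such an augmentation is a propose-and-verify loop, sketched below under our own assumptions: `llm` and `checker` are hypothetical interfaces, and the retry budget is illustrative rather than anything the paper specifies.

```python
def solve(query: str, llm, checker, max_rounds: int = 3):
    # Propose-and-verify loop: the LLM drafts a candidate answer, a
    # symbolic module checks it, and counterexamples are fed back as
    # hints. Strict logical guarantees come from the checker, not the LLM.
    for _ in range(max_rounds):
        candidate = llm.generate(query)
        ok, counterexample = checker.verify(candidate)
        if ok:
            return candidate
        query += f"\nYour previous answer failed on: {counterexample}. Revise it."
    return None  # no verified answer within the round budget
```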
arXiv Detail & Related papers (2025-06-29T22:03:01Z)
- Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z)
- A Weighted Byzantine Fault Tolerance Consensus Driven Trusted Multiple Large Language Models Network [53.37983409425452]
Large Language Models (LLMs) have achieved remarkable success across a wide range of applications. Recently, collaborative frameworks such as the Multi-LLM Network (MultiLLMN) have been introduced. We propose a novel Trusted MultiLLMN framework driven by a weighted Byzantine Fault Tolerance (WBFT) blockchain consensus mechanism.
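A weighted vote of this general shape might look like the following sketch; the reputation weights and the two-thirds quorum are illustrative BFT-style assumptions, not the paper's exact WBFT protocol.

```python
from collections import defaultdict

def wbft_vote(responses: dict[str, str], weights: dict[str, float],
              quorum: float = 2 / 3) -> str | None:
    # Each LLM node's answer counts with its reputation weight; an answer
    # is accepted only if it gathers more than `quorum` of the total
    # weight, so a low-weight minority of faulty nodes cannot prevail.
    tally: defaultdict[str, float] = defaultdict(float)
    for node, answer in responses.items():
        tally[answer] += weights.get(node, 0.0)
    winner = max(tally, key=tally.get, default=None)
    if winner is None or tally[winner] <= quorum * sum(weights.values()):
        return None  # no consensus reached
    return winner
```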
arXiv Detail & Related papers (2025-05-08T10:04:41Z)
- A Trustworthy Multi-LLM Network: Challenges, Solutions, and A Use Case [59.58213261128626]
We propose a blockchain-enabled collaborative framework that connects multiple Large Language Models (LLMs) into a Trustworthy Multi-LLM Network (MultiLLMN). This architecture enables the cooperative evaluation and selection of the most reliable and high-quality responses to complex network optimization problems.
arXiv Detail & Related papers (2025-05-06T05:32:46Z)
- GateLens: A Reasoning-Enhanced LLM Agent for Automotive Software Release Analytics [9.549568621873386]
GateLens is an LLM-based system for analyzing data in the automotive domain. Unlike traditional multi-agent or planning-based systems that can be slow, opaque, and costly to maintain, GateLens emphasizes speed, transparency, and reliability.
arXiv Detail & Related papers (2025-03-27T17:48:32Z)
- How Robust Are Router-LLMs? Analysis of the Fragility of LLM Routing Capabilities [62.474732677086855]
Large language model (LLM) routing has emerged as a crucial strategy for balancing computational costs with performance.
We propose the DSC benchmark: Diverse, Simple, and Categorized, an evaluation framework that categorizes router performance across a broad spectrum of query types.
arXiv Detail & Related papers (2025-03-20T19:52:30Z)
- Confident or Seek Stronger: Exploring Uncertainty-Based On-device LLM Routing From Benchmarking to Generalization [61.02719787737867]
Large language models (LLMs) are increasingly deployed and democratized on edge devices.
One promising solution is uncertainty-based SLM routing, which offloads high-stakes queries to a stronger LLM when the on-device small language model (SLM) produces a low-confidence response.
We conduct a comprehensive investigation into the benchmarking and generalization of uncertainty-driven routing strategies from SLMs to LLMs across more than 1,500 settings.
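The routing rule itself fits in a few lines. In this sketch, the mean token log-probability stands in for the uncertainty signal, and `slm`/`llm` are hypothetical clients returning generated text plus per-token log-probabilities; the threshold value is an arbitrary placeholder.

```python
def route(query: str, slm, llm, threshold: float = -0.5) -> tuple[str, str]:
    # Answer on-device with the small model first; fall back to the
    # stronger remote model when the mean token log-probability signals
    # low confidence (the threshold is an illustrative choice).
    text, token_logprobs = slm.generate(query)
    confidence = sum(token_logprobs) / max(len(token_logprobs), 1)
    if confidence >= threshold:
        return text, "slm"
    fallback_text, _ = llm.generate(query)
    return fallback_text, "llm"
```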
arXiv Detail & Related papers (2025-02-06T18:59:11Z)
- NILE: Internal Consistency Alignment in Large Language Models [59.16120063368364]
We introduce the NILE (iNternal consIstency aLignmEnt) framework, aimed at optimizing instruction fine-tuning (IFT) datasets to further unlock LLMs' capabilities. NILE operates by eliciting the target pre-trained LLM's internal knowledge corresponding to instruction data. Our experiments demonstrate that NILE-aligned IFT datasets sharply boost LLM performance across multiple ability-evaluation datasets.
arXiv Detail & Related papers (2024-12-21T16:25:16Z)
- System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective [24.583984374370342]
Large Language Model-based systems (LLM systems) are information and query processing systems.
We present a system-level defense based on the principles of information flow control that we call an f-secure LLM system.
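The core information-flow-control idea can be sketched as label propagation: every piece of context carries a provenance label, and a privileged action is permitted only when no untrusted flow feeds it. The labels and the "least trusted input wins" join rule below are standard IFC conventions, not necessarily the paper's exact formulation.

```python
from dataclasses import dataclass

TRUSTED, UNTRUSTED = "trusted", "untrusted"

@dataclass
class Labeled:
    text: str
    label: str  # provenance label attached when the data enters the system

def join(*items: Labeled) -> str:
    # A derived value is only as trustworthy as its least trusted input.
    return TRUSTED if all(i.label == TRUSTED for i in items) else UNTRUSTED

def allow(action_is_privileged: bool, context: list[Labeled]) -> bool:
    # Privileged actions (tool calls, outbound messages) must not be
    # influenced by untrusted data such as retrieved web content.
    return not (action_is_privileged and join(*context) == UNTRUSTED)
```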
arXiv Detail & Related papers (2024-09-27T18:41:58Z)
- Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capabilities on various tasks, and integrating these capabilities into Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
In this study, we propose an LLM-based Generative IoT (GIoT) system deployed in a local network setting.
arXiv Detail & Related papers (2024-06-14T19:24:00Z)
- Evolutionary Large Language Models for Hardware Security: A Comparative Survey [0.4642370358223669]
This study explores the seeds of Large Language Model (LLM) integration in register transfer level (RTL) designs.
LLMs can be harnessed to automatically rectify security-relevant vulnerabilities inherent in hardware designs.
arXiv Detail & Related papers (2024-04-25T14:42:12Z)
- Federated Transfer Learning with Task Personalization for Condition Monitoring in Ultrasonic Metal Welding [3.079885946230076]
This paper presents a Federated Transfer Learning with Task Personalization (FTL-TP) architecture that provides condition-monitoring capabilities in a distributed learning framework.
The FTL-TP framework is readily adaptable to various other manufacturing applications.
arXiv Detail & Related papers (2024-04-20T05:31:59Z)
- LLM4SecHW: Leveraging Domain Specific Large Language Model for Hardware Debugging [4.297043877989406]
This paper presents a novel framework for hardware debugging that leverages a domain-specific Large Language Model (LLM).
We propose a unique approach to compile a dataset of open source hardware design defects and their remediation steps.
LLM4SecHW employs fine-tuning of medium-sized LLMs based on this dataset, enabling the identification and rectification of bugs in hardware designs.
arXiv Detail & Related papers (2024-01-28T19:45:25Z)
- Secure Instruction and Data-Level Information Flow Tracking Model for RISC-V [0.0]
Unauthorized access, fault injection, and privacy invasion are potential threats from untrusted actors.
We propose an integrated Information Flow Tracking (IFT) technique that enables runtime security to protect system integrity.
This study proposes a multi-level IFT model that integrates a hardware-based IFT technique with a gate-level-based IFT (GLIFT) technique.
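For a single gate, the GLIFT shadow logic is standard in the literature; the Python rendering below is ours, and the paper's multi-level model builds such per-gate rules into a full-design tracker. The precision point is that taint propagates only when a tainted input can actually change the output.

```python
def and_glift(a: int, b: int, ta: int, tb: int) -> tuple[int, int]:
    # GLIFT shadow logic for a 2-input AND gate (1-bit values, 1 = tainted).
    # An untainted controlling 0 on one input masks taint on the other:
    # the output is fixed at 0 no matter what the tainted input does.
    out = a & b
    out_taint = (ta & tb) | (ta & b) | (tb & a)
    return out, out_taint

# A tainted 1 on input a is blocked by an untainted 0 on input b.
assert and_glift(a=1, b=0, ta=1, tb=0) == (0, 0)
```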
arXiv Detail & Related papers (2023-11-17T02:04:07Z)
- FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning [70.38817963253034]
This paper first discusses the challenges of federated fine-tuning of LLMs, and introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
arXiv Detail & Related papers (2023-09-01T09:40:36Z)
- Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We make a systematic review of the literature, including the general methodology of supervised fine-tuning (SFT). We also review the potential pitfalls of SFT and criticism against it, along with efforts pointing out current deficiencies of existing strategies.
arXiv Detail & Related papers (2023-08-21T15:35:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.