FireGuard: A Generalized Microarchitecture for Fine-Grained Security Analysis on OoO Superscalar Cores
- URL: http://arxiv.org/abs/2504.01380v1
- Date: Wed, 02 Apr 2025 05:49:22 GMT
- Title: FireGuard: A Generalized Microarchitecture for Fine-Grained Security Analysis on OoO Superscalar Cores
- Authors: Zhe Jiang, Sam Ainsworth, Timothy Jones,
- Abstract summary: Generic programmable support for fine-grained instruction analysis is a building block for the security of future processors.<n>We introduce FireGuard, the first implementation of fine-grained instruction analysis on a real OoO superscalar processor.<n> Experiments show that our solution simultaneously ensures both security and performance of the system, with parallel scalability.
- Score: 6.143617408446782
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-performance security guarantees rely on hardware support. Generic programmable support for fine-grained instruction analysis has gained broad interest in the literature as a fundamental building block for the security of future processors. Yet, implementation in real out-of-order (OoO) superscalar processors presents tough challenges that cannot be explored in highly abstract simulators. We detail the challenges of implementing complex programmable pathways without critical paths or contention. We then introduce FireGuard, the first implementation of fine-grained instruction analysis on a real OoO superscalar processor. We establish an end-to-end system, including microarchitecture, SoC, ISA and programming model. Experiments show that our solution simultaneously ensures both security and performance of the system, with parallel scalability. We examine the feasibility of building FireGuard into modern SoCs: Apple's M1-Pro, Huawei's Kirin-960, and Intel's i7-12700F, where less than 1% silicon area is introduced. The Repo. of FireGuard's source code: https://github.com/SEU-ACAL/reproduce-FireGuard-DAC-25.
Related papers
- VulGuard: An Unified Tool for Evaluating Just-In-Time Vulnerability Prediction Models [3.4299920908334673]
VulGuard is an automated tool designed to streamline the extraction, processing, and analysis of commits from GitHub repositories for vulnerability prediction (JIT-VP) research.<n>It automatically mines commit histories, extracts fine-grained code changes, commit messages, and software engineering metrics, and formats them for downstream analysis.
arXiv Detail & Related papers (2025-07-22T15:18:44Z) - Security Enclave Architecture for Heterogeneous Security Primitives for Supply-Chain Attacks [7.9837065464150685]
This paper explores the range of obstacles encountered when building a unified security architecture capable of addressing multiple attack vectors.<n>We present a thorough evaluation of its impact on silicon area and power consumption across various ASIC technologies.
arXiv Detail & Related papers (2025-07-15T04:28:02Z) - Decompiling Smart Contracts with a Large Language Model [51.49197239479266]
Despite Etherscan's 78,047,845 smart contracts deployed on (as of May 26, 2025), a mere 767,520 ( 1%) are open source.<n>This opacity necessitates the automated semantic analysis of on-chain smart contract bytecode.<n>We introduce a pioneering decompilation pipeline that transforms bytecode into human-readable and semantically faithful Solidity code.
arXiv Detail & Related papers (2025-06-24T13:42:59Z) - Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement [48.50995874445193]
Large Language Models (LLMs) have shown impressive capabilities across various tasks but remain vulnerable to meticulously crafted jailbreak attacks.<n>We propose SAGE (Self-Aware Guard Enhancement), a training-free defense strategy designed to align LLMs' strong safety discrimination performance with their relatively weaker safety generation ability.
arXiv Detail & Related papers (2025-05-17T15:54:52Z) - DoomArena: A framework for Testing AI Agents Against Evolving Security Threats [84.94654617852322]
We present DoomArena, a security evaluation framework for AI agents.
It is a plug-in framework and integrates easily into realistic agentic frameworks.
It is modular and decouples the development of attacks from details of the environment in which the agent is deployed.
arXiv Detail & Related papers (2025-04-18T20:36:10Z) - AutoAdvExBench: Benchmarking autonomous exploitation of adversarial example defenses [66.87883360545361]
AutoAdvExBench is a benchmark to evaluate if large language models (LLMs) can autonomously exploit defenses to adversarial examples.<n>We design a strong agent that is capable of breaking 75% of CTF-like ("homework exercise") adversarial example defenses.<n>We show that this agent is only able to succeed on 13% of the real-world defenses in our benchmark, indicating the large gap between difficulty in attacking "real" code, and CTF-like code.
arXiv Detail & Related papers (2025-03-03T18:39:48Z) - Instructional Segment Embedding: Improving LLM Safety with Instruction Hierarchy [53.54777131440989]
Large Language Models (LLMs) are susceptible to security and safety threats.
One major cause of these vulnerabilities is the lack of an instruction hierarchy.
We introduce the instructional segment Embedding (ISE) technique, inspired by BERT, to modern large language models.
arXiv Detail & Related papers (2024-10-09T12:52:41Z) - PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing [1.474945380093949]
Inference-Time Guardrails (ITG) offer solutions that shift model output distributions towards compliance.
Current methods struggle in balancing safety with helpfulness.
We propose PrimeGuard, a novel ITG method that utilizes structured control flow.
arXiv Detail & Related papers (2024-07-23T09:14:27Z) - SecScale: A Scalable and Secure Trusted Execution Environment for Servers [0.36868085124383626]
Intel plans to deprecate its most trustworthy enclave, SGX, on its 11th and 12th generation processors.
We propose SecScale that uses some new ideas centered around speculative execution.
We show that we are 10% faster than the nearest competing alternative.
arXiv Detail & Related papers (2024-07-18T15:14:36Z) - GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning [79.07152553060601]
We propose GuardAgent, the first guardrail agent to protect the target agents by dynamically checking whether their actions satisfy given safety guard requests.<n>Specifically, GuardAgent first analyzes the safety guard requests to generate a task plan, and then maps this plan into guardrail code for execution.<n>We show that GuardAgent effectively moderates the violation actions for different types of agents on two benchmarks with over 98% and 83% guardrail accuracies.
arXiv Detail & Related papers (2024-06-13T14:49:26Z) - JustSTART: How to Find an RSA Authentication Bypass on Xilinx UltraScale(+) with Fuzzing [12.338137154105034]
We investigate fuzzing for 7-Series and UltraScale(+) FPGA configuration engines.
Our goal is to examine the effectiveness of fuzzing to analyze and document the inner workings of FPGA configuration engines.
arXiv Detail & Related papers (2024-02-15T10:03:35Z) - Abusing Processor Exception for General Binary Instrumentation on Bare-metal Embedded Devices [11.520387655426521]
PIFER (Practical Instrumenting Framework for Embedded fiRmware) enables general and fine-grained static binary instrumentation for embedded bare-metal firmware.
We propose an instruction translation-based scheme to guarantee the correct execution of the original firmware after patching.
arXiv Detail & Related papers (2023-11-28T05:32:20Z) - A LLM Assisted Exploitation of AI-Guardian [57.572998144258705]
We evaluate the robustness of AI-Guardian, a recent defense to adversarial examples published at IEEE S&P 2023.
We write none of the code to attack this model, and instead prompt GPT-4 to implement all attack algorithms following our instructions and guidance.
This process was surprisingly effective and efficient, with the language model at times producing code from ambiguous instructions faster than the author of this paper could have done.
arXiv Detail & Related papers (2023-07-20T17:33:25Z) - Citadel: Simple Spectre-Safe Isolation For Real-World Programs That Share Memory [8.414722884952525]
We introduce a new security property we call relaxed microarchitectural isolation (RMI)<n>RMI allows sensitive programs that are not-constant-time to share memory with an attacker while restricting the information leakage to that of non-speculative execution.<n>Our end-to-end prototype, Citadel, consists of an FPGA-based multicore processor that boots Linux and runs secure applications.
arXiv Detail & Related papers (2023-06-26T17:51:23Z) - Evil from Within: Machine Learning Backdoors through Hardware Trojans [51.81518799463544]
Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars.<n>We introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning.<n>We demonstrate the practical feasibility of our attack by implanting our hardware trojan into the Xilinx Vitis AI DPU.
arXiv Detail & Related papers (2023-04-17T16:24:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.