Accelerating Hardware Verification with Graph Models
- URL: http://arxiv.org/abs/2412.13374v2
- Date: Thu, 02 Jan 2025 14:12:28 GMT
- Title: Accelerating Hardware Verification with Graph Models
- Authors: Raghul Saravanan, Sreenitha Kasarapu, Sai Manoj Pudukotai Dinakarrao
- Abstract summary: We introduce GraphFuzz, a graph-based hardware fuzzer for gate-level netlist verification.
In this approach, hardware designs are modeled as graph nodes, with gate behaviors encoded as features.
By leveraging graph learning algorithms, GraphFuzz efficiently detects hardware vulnerabilities by analyzing node patterns.
- Score: 0.43988112145759295
- Abstract: The increasing complexity of modern processor and IP designs presents significant challenges in identifying and mitigating hardware flaws early in the IC design cycle. Traditional hardware fuzzing techniques, inspired by software testing, have shown promise but face scalability issues, especially at the gate-level netlist where bugs introduced during synthesis are often missed by RTL-level verification due to longer simulation times. To address this, we introduce GraphFuzz, a graph-based hardware fuzzer designed for gate-level netlist verification. In this approach, hardware designs are modeled as graph nodes, with gate behaviors encoded as features. By leveraging graph learning algorithms, GraphFuzz efficiently detects hardware vulnerabilities by analyzing node patterns. Our evaluation across benchmark circuits and open-source processors demonstrates an average prediction accuracy of 80% and bug detection accuracy of 70%, highlighting the potential of graph-based methods for enhancing hardware verification.
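As a rough illustration of the modeling step described in the abstract, the sketch below encodes a tiny gate-level netlist as a graph with one-hot gate-type features and runs a small GCN node classifier in the PyTorch Geometric style. The netlist, feature encoding, and label set are invented for illustration and do not reflect the GraphFuzz implementation.

```python
# Minimal sketch: encode a gate-level netlist as a graph and classify nodes
# with a small GCN. Gate types, features, and labels are illustrative only,
# not the GraphFuzz implementation.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Hypothetical netlist: (gate_id, gate_type, fan-in gate_ids)
netlist = [
    (0, "AND", []),        # driven only by primary inputs in this toy example
    (1, "OR",  [0]),
    (2, "XOR", [0, 1]),
    (3, "NOT", [2]),
]
gate_types = {"AND": 0, "OR": 1, "XOR": 2, "NOT": 3}

# Node features: one-hot gate type, a stand-in for "gate behaviors encoded as features"
x = torch.zeros(len(netlist), len(gate_types))
for gid, gtype, _ in netlist:
    x[gid, gate_types[gtype]] = 1.0

# Directed edges: fan-in gate -> gate
edges = [(src, gid) for gid, _, fanin in netlist for src in fanin]
edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
data = Data(x=x, edge_index=edge_index)

class NodeClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_classes=2):   # benign vs. suspicious gate
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = NodeClassifier(in_dim=x.size(1), hidden=16)
logits = model(data)                 # [num_gates, 2]
print(logits.argmax(dim=-1))         # per-gate prediction
```

In the paper's setting, trained node-level predictions would flag gates whose patterns look anomalous; here the untrained toy model simply produces arbitrary labels.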
Related papers
- Keep It Simple: Towards Accurate Vulnerability Detection for Large Code Graphs [6.236203127696138]
We propose a novel vulnerability detection method, ANGLE, which combines hierarchical graph refinement with context-aware graph representation learning.
Our method significantly outperforms several other baselines in terms of accuracy and F1 score.
In large code graphs, ANGLE achieves an improvement in accuracy of 34.27%-161.93% compared to the state-of-the-art method, AMPLE.
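As a minimal sketch of one general idea behind handling large code graphs, the snippet below bounds subgraph size by taking a BFS slice around a seed node before any representation learning; ANGLE's actual hierarchical refinement and context-aware learning are not reproduced, and the seeds and size limit are arbitrary.

```python
# Illustrative only: cut a large code graph into bounded-size slices before
# embedding. This is a generic stand-in, not ANGLE's refinement procedure.
import networkx as nx

def bfs_slice(graph: nx.DiGraph, seed, max_nodes: int = 64) -> nx.DiGraph:
    """Take a bounded BFS neighborhood around a seed statement node."""
    kept, frontier = {seed}, [seed]
    while frontier and len(kept) < max_nodes:
        nxt = []
        for u in frontier:
            for v in graph.successors(u):
                if v not in kept:
                    kept.add(v)
                    nxt.append(v)
                if len(kept) >= max_nodes:
                    break
        frontier = nxt
    return graph.subgraph(kept).copy()

g = nx.gnp_random_graph(500, 0.01, directed=True, seed=0)   # stand-in code graph
slices = [bfs_slice(g, seed) for seed in [0, 100, 200]]     # seeds ~ candidate sinks
print([s.number_of_nodes() for s in slices])
```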
arXiv Detail & Related papers (2024-12-13T14:27:51Z)
- Explainable Malware Detection through Integrated Graph Reduction and Learning Techniques [2.464148828287322]
Control Flow Graphs and Function Call Graphs have become pivotal in providing a detailed understanding of program execution.
These graph-based representations, when combined with Graph Neural Networks (GNN), have shown promise in developing high-performance malware detectors, but the resulting graphs can be large and the GNN's decisions hard to interpret.
This paper addresses these issues by developing several graph reduction techniques to reduce graph size and by applying the state-of-the-art GNNExplainer to enhance the interpretability of GNN outputs.
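The following is a hedged sketch of one generic graph-reduction pass (chain contraction), not the specific techniques developed in the paper; it only illustrates how a control flow graph can be shrunk before being fed to a GNN.

```python
# Generic chain-contraction pass: merge straight-line runs of basic blocks so
# the GNN sees a smaller graph. Not the paper's reduction techniques.
import networkx as nx

def contract_chains(cfg: nx.DiGraph) -> nx.DiGraph:
    g = cfg.copy()
    changed = True
    while changed:
        changed = False
        for n in list(g.nodes):
            preds, succs = list(g.predecessors(n)), list(g.successors(n))
            # A "chain" node: exactly one way in, one way out, no self-loop.
            if len(preds) == 1 and len(succs) == 1 and preds[0] != n != succs[0]:
                g.add_edge(preds[0], succs[0])
                g.remove_node(n)
                changed = True
    return g

cfg = nx.DiGraph()
cfg.add_edges_from([(0, 1), (1, 2), (2, 3),              # straight-line prologue
                    (3, 4), (3, 5), (4, 6), (5, 6)])     # branch and join
print(contract_chains(cfg).number_of_nodes())  # prints 2: this aggressive pass
                                               # leaves only the entry and exit
```

A practical reduction would also protect nodes that carry labels of interest (API calls, suspicious instructions) from being contracted.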
arXiv Detail & Related papers (2024-12-04T18:59:45Z)
- Scalable Defect Detection via Traversal on Code Graph [10.860910384163892]
We introduce QVoG, a graph-based static analysis platform for detecting defects and vulnerabilities.
It employs a compressed CPG representation to maintain a reasonable graph size, thereby enhancing the overall query efficiency.
For projects consisting of 1,000,000+ lines of code, QVoG can complete analysis in approximately 15 minutes, as opposed to 19 minutes with CodeQL.
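The toy query below shows the flavor of a traversal-style defect query over a code property graph ("a taint source reaches a sink without passing a sanitizer"); QVoG's query language and compressed CPG representation are not modeled, and the node names and attributes are made up.

```python
# Toy traversal query over a hand-built code property graph. Node names and
# attributes are invented; this is not QVoG's representation or query API.
import networkx as nx

cpg = nx.DiGraph()
cpg.add_node("read_input", kind="source")
cpg.add_node("strip_tags", kind="sanitizer")
cpg.add_node("render_page", kind="sink")
cpg.add_node("log_value", kind="other")
cpg.add_edges_from([("read_input", "strip_tags"), ("strip_tags", "render_page"),
                    ("read_input", "log_value"), ("log_value", "render_page")])

def unsanitized_flows(g):
    safe = g.copy()
    safe.remove_nodes_from(n for n, d in g.nodes(data=True) if d["kind"] == "sanitizer")
    sources = [n for n, d in g.nodes(data=True) if d["kind"] == "source"]
    sinks = [n for n, d in g.nodes(data=True) if d["kind"] == "sink"]
    return [(s, t) for s in sources for t in sinks if nx.has_path(safe, s, t)]

print(unsanitized_flows(cpg))   # the flow via log_value bypasses the sanitizer
```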
arXiv Detail & Related papers (2024-06-12T11:24:52Z)
- Multitask Active Learning for Graph Anomaly Detection [48.690169078479116]
We propose a novel MultItask acTIve Graph Anomaly deTEction framework, namely MITIGATE.
By coupling node classification tasks, MITIGATE obtains the capability to detect out-of-distribution nodes without known anomalies.
Empirical studies on four datasets demonstrate that MITIGATE significantly outperforms the state-of-the-art methods for anomaly detection.
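As a hedged sketch of just the query step of active learning on graphs, the snippet below selects unlabeled nodes with the highest predictive entropy; MITIGATE's multitask coupling and its actual selection criteria are not reproduced here.

```python
# Entropy-based query selection only, as a generic active-learning stand-in.
import torch

def query_by_entropy(logits: torch.Tensor, unlabeled: torch.Tensor, budget: int):
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp(min=1e-12).log()).sum(dim=-1)
    entropy[~unlabeled] = float("-inf")      # never re-query labeled nodes
    return entropy.topk(budget).indices      # nodes to send to the annotator

logits = torch.randn(100, 4)                 # per-node class scores
unlabeled = torch.ones(100, dtype=torch.bool)
unlabeled[:10] = False                       # first 10 nodes already labeled
print(query_by_entropy(logits, unlabeled, budget=5))
```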
arXiv Detail & Related papers (2024-01-24T03:43:45Z)
- ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD).
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
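A minimal sketch of the scoring side of autoencoder-based anomaly detection, assuming a plain attribute autoencoder and using reconstruction error as the anomaly score; ADA-GAD's graph encoders, anomaly-denoised augmentation, and decoder retraining are only referenced in the comments.

```python
# Attribute autoencoder with reconstruction-error scoring, as a generic
# stand-in for autoencoder-based graph anomaly detection.
import torch
import torch.nn as nn

class AttributeAE(nn.Module):
    def __init__(self, dim, hidden=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
x = torch.randn(100, 8)        # stand-in node attributes
x[:5] += 5.0                   # inject a few attribute anomalies

model = AttributeAE(dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):           # ADA-GAD's first stage pre-trains on a denoised
    opt.zero_grad()            # augmentation; for brevity we train on x directly
    loss = ((model(x) - x) ** 2).mean()
    loss.backward()
    opt.step()

# The paper's second stage retrains the decoders on the original graph; the
# anomaly score is then the per-node reconstruction error.
scores = ((model(x) - x) ** 2).mean(dim=1)
print(scores.topk(5).indices)  # ideally the five injected anomalies rank highest
```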
arXiv Detail & Related papers (2023-12-22T09:02:01Z)
- MultiScale MeshGraphNets [65.26373813797409]
We propose two complementary approaches to improve the MeshGraphNets framework.
First, we demonstrate that it is possible to learn accurate surrogate dynamics of a high-resolution system on a much coarser mesh.
Second, we introduce a hierarchical approach (MultiScale MeshGraphNets) which passes messages on two different resolutions.
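The sketch below illustrates message passing at two resolutions in plain PyTorch, with a hand-made assignment of fine nodes to coarse nodes; it is a toy stand-in, not the MeshGraphNets architecture or training setup.

```python
# Two-resolution message passing with dense adjacency and mean aggregation.
# The fine-to-coarse assignment and network sizes are invented for this toy.
import torch
import torch.nn as nn

class TwoScaleBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fine_mlp = nn.Linear(2 * dim, dim)
        self.coarse_mlp = nn.Linear(2 * dim, dim)

    def forward(self, h, adj_fine, adj_coarse, assign):
        # 1) message passing on the fine mesh (mean over neighbors)
        agg = adj_fine @ h / adj_fine.sum(-1, keepdim=True).clamp(min=1)
        h = torch.relu(self.fine_mlp(torch.cat([h, agg], dim=-1)))
        # 2) pool fine nodes to coarse nodes; assign is [n_fine, n_coarse], rows one-hot
        hc = assign.t() @ h / assign.sum(0).unsqueeze(-1).clamp(min=1)
        aggc = adj_coarse @ hc / adj_coarse.sum(-1, keepdim=True).clamp(min=1)
        hc = torch.relu(self.coarse_mlp(torch.cat([hc, aggc], dim=-1)))
        # 3) broadcast the coarse update back onto the fine mesh
        return h + assign @ hc

n_fine, n_coarse, dim = 8, 2, 4
h = torch.randn(n_fine, dim)
adj_fine = (torch.rand(n_fine, n_fine) < 0.3).float()
adj_coarse = torch.ones(n_coarse, n_coarse)
assign = torch.zeros(n_fine, n_coarse)
assign[:4, 0] = 1.0            # first half of the fine nodes map to coarse node 0
assign[4:, 1] = 1.0
print(TwoScaleBlock(dim)(h, adj_fine, adj_coarse, assign).shape)  # torch.Size([8, 4])
```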
arXiv Detail & Related papers (2022-10-02T20:16:20Z)
- Deep Fraud Detection on Non-attributed Graph [61.636677596161235]
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
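One common workaround for non-attributed graphs (not necessarily the paper's pre-training strategy) is to derive structural node features so a GNN has inputs to pre-train on; a minimal feature-construction sketch follows.

```python
# Derive simple structural node features (degree, clustering coefficient) for
# a graph that has no native attributes. Generic trick, not the paper's method.
import networkx as nx
import torch

g = nx.karate_club_graph()                      # stand-in transaction graph
deg = dict(g.degree())
clus = nx.clustering(g)
x = torch.tensor([[deg[n], clus[n]] for n in g.nodes()], dtype=torch.float)
print(x.shape)                                  # [34, 2] structural features
```

A self-supervised pre-training objective, for example predicting held-out edges, could then be run over the unlabeled graph before fine-tuning on the few labeled fraud cases.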
arXiv Detail & Related papers (2021-10-04T03:42:09Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
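To make the "code admits graph structures with parsing" point concrete, the snippet below builds a parent-child AST graph for a small Python function using the standard ast module; the paper's code graphs are richer and target other languages, so this is only a stand-in for the graph-construction stage.

```python
# Build a simple AST graph from Python source with the standard library.
# Illustrative graph construction only, not the paper's representation.
import ast
import networkx as nx

src = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
tree = ast.parse(src)

g = nx.DiGraph()
for parent in ast.walk(tree):
    for child in ast.iter_child_nodes(parent):
        g.add_edge(parent, child, label=type(child).__name__)

print(g.number_of_nodes(), g.number_of_edges())  # node = AST node, edge = parent-child
```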
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
- HW2VEC: A Graph Learning Tool for Automating Hardware Security [4.188344897982036]
We propose HW2VEC, an open-source graph learning tool for hardware security applications.
We show that HW2VEC can achieve state-of-the-art performance on two hardware security-related tasks: Hardware Trojan Detection and Intellectual Property Piracy Detection.
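A generic graph-level classification sketch in the PyTorch Geometric style, deciding Trojan vs. clean for a whole design graph; this is not HW2VEC's actual API or pipeline, and the toy graph and feature sizes are arbitrary.

```python
# Whole-graph classification with a GCN layer and mean pooling. Generic
# illustration of graph-level hardware-security decisions, not HW2VEC's API.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden=32, num_classes=2):
        super().__init__()
        self.conv = GCNConv(in_dim, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv(x, edge_index))
        return self.head(global_mean_pool(h, batch))   # one logit vector per design

# One toy "design": 5 graph nodes, a few connections, all belonging to graph 0.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
batch = torch.zeros(5, dtype=torch.long)
print(GraphClassifier(in_dim=8)(x, edge_index, batch).shape)  # torch.Size([1, 2])
```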
arXiv Detail & Related papers (2021-07-26T17:03:51Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
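A hedged sketch of the noise-regularisation idea as summarized here: perturb the node inputs, run many shared message-passing steps, and add an auxiliary denoising loss next to the task loss. The architecture, noise scale, and loss form are assumptions, not the paper's model.

```python
# 100 shared message-passing steps plus an auxiliary denoising objective.
# Toy stand-in for noise-regularised deep GNN training.
import torch
import torch.nn as nn

dim, steps = 16, 100
update = nn.GRUCell(dim, dim)              # one shared update reused at every step
denoise_head = nn.Linear(dim, dim)

x_clean = torch.randn(32, dim)             # node features for 32 nodes
x_noisy = x_clean + 0.1 * torch.randn_like(x_clean)
adj = (torch.rand(32, 32) < 0.1).float()   # random stand-in adjacency

h = x_noisy
for _ in range(steps):                     # 100 message passing steps
    msg = adj @ h / adj.sum(-1, keepdim=True).clamp(min=1)   # mean-aggregate neighbors
    h = update(msg, h)

main_loss = h.pow(2).mean()                                   # placeholder task loss
denoise_loss = (denoise_head(h) - x_clean).pow(2).mean()      # predict the clean inputs
(main_loss + denoise_loss).backward()
print(denoise_loss.item())
```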
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.