Verilog-to-PyG -- A Framework for Graph Learning and Augmentation on RTL
Designs
- URL: http://arxiv.org/abs/2311.05722v1
- Date: Thu, 9 Nov 2023 20:11:40 GMT
- Title: Verilog-to-PyG -- A Framework for Graph Learning and Augmentation on RTL
Designs
- Authors: Yingjie Li and Mingju Liu and Alan Mishchenko and Cunxi Yu
- Abstract summary: We introduce an innovative open-source framework that translates RTL designs into graph representation foundations.
The Verilog-to-PyG (V2PYG) framework is compatible with the open-source Electronic Design Automation (EDA) toolchain OpenROAD.
We will present novel RTL data augmentation methods that enable functionally equivalent design augmentation for the construction of an extensive graph-based RTL design database.
- Score: 15.67829950106923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The complexity of modern hardware designs necessitates advanced methodologies
for optimizing and analyzing modern digital systems. In recent times, machine
learning (ML) methodologies have emerged as potent instruments for assessing
design quality-of-results at the Register-Transfer Level (RTL) or Boolean
level, aiming to expedite design exploration of advanced RTL configurations. In
this presentation, we introduce an innovative open-source framework that
translates RTL designs into graph representation foundations, which can be
seamlessly integrated with the PyTorch Geometric graph learning platform.
Furthermore, the Verilog-to-PyG (V2PYG) framework is compatible with the
open-source Electronic Design Automation (EDA) toolchain OpenROAD, facilitating
the collection of labeled datasets in an utterly open-source manner.
Additionally, we will present novel RTL data augmentation methods (incorporated
in our framework) that enable functionally equivalent design augmentation for the
construction of an extensive graph-based RTL design database. Lastly, we will
showcase several use cases of V2PYG with detailed scripting examples. V2PYG
can be found at https://yu-maryland.github.io/Verilog-to-PyG/.
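For a sense of the graph foundation the framework targets, the sketch below encodes a one-line RTL expression as a PyTorch Geometric Data object; the node-type vocabulary, feature layout, and label are illustrative placeholders rather than the encoding V2PYG actually emits.

```python
# Minimal sketch: encode the RTL expression  y = (a & b) | c  as a PyG graph.
# The node-type vocabulary and feature layout are illustrative placeholders,
# not the encoding actually produced by V2PYG.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data

NODE_TYPES = {"input": 0, "and": 1, "or": 2}
node_type = [NODE_TYPES["input"],   # node 0: a
             NODE_TYPES["input"],   # node 1: b
             NODE_TYPES["input"],   # node 2: c
             NODE_TYPES["and"],     # node 3: a & b
             NODE_TYPES["or"]]      # node 4: (a & b) | c  -> drives y

# One-hot node features over the placeholder type vocabulary.
x = F.one_hot(torch.tensor(node_type), num_classes=len(NODE_TYPES)).float()

# Directed edges follow signal flow: a->AND, b->AND, AND->OR, c->OR.
edge_index = torch.tensor([[0, 1, 3, 2],
                           [3, 3, 4, 4]], dtype=torch.long)

# A per-design label (e.g. a QoR metric collected through OpenROAD) goes in y.
design = Data(x=x, edge_index=edge_index, y=torch.tensor([0.0]))
print(design)  # Data(x=[5, 3], edge_index=[2, 4], y=[1])
```

Functionally equivalent augmentation then amounts to rewriting the expression (for example, replacing a & b with ~(~a | ~b) via De Morgan's law), which yields a structurally different graph that can carry the same label in the dataset.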
Related papers
- RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph [63.87660059104077]
We present RepoGraph, a plug-in module that manages a repository-level structure for modern AI software engineering solutions.
RepoGraph substantially boosts the performance of all systems, leading to a new state-of-the-art among open-source frameworks.
arXiv Detail & Related papers (2024-10-03T05:45:26Z)
- Customized Information and Domain-centric Knowledge Graph Construction with Large Language Models [0.0]
We propose a novel approach based on knowledge graphs to provide timely access to structured information.
Our framework encompasses a text mining process, which includes information retrieval, keyphrase extraction, semantic network creation, and topic map visualization.
We apply our methodology to the domain of automotive electrical systems to demonstrate the approach, which is scalable.
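As a rough illustration of the semantic-network step in this pipeline, the snippet below links keyphrases that co-occur in the same document; the co-occurrence heuristic and the automotive keyphrases are stand-ins, not the construction used in the paper.

```python
# Toy semantic network: nodes are keyphrases, edges connect keyphrases that
# co-occur in the same document, weighted by co-occurrence count. This is a
# simplification, not the construction used in the cited paper.
from itertools import combinations
import networkx as nx

docs_keyphrases = [
    ["wiring harness", "battery management", "CAN bus"],
    ["CAN bus", "diagnostics", "battery management"],
]

g = nx.Graph()
for phrases in docs_keyphrases:
    for u, v in combinations(phrases, 2):
        weight = g.get_edge_data(u, v, default={"weight": 0})["weight"]
        g.add_edge(u, v, weight=weight + 1)

print(g.number_of_nodes(), g.number_of_edges())  # 4 5
```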
arXiv Detail & Related papers (2024-09-30T07:08:28Z)
- Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification [48.334100429553644]
This paper proposes to design a joint graph data and architecture mechanism, which identifies important sub-architectures via the valuable graph data.
To search for optimal lightweight Graph Neural Networks (GNNs), we propose a Lightweight Graph Neural Architecture Search with Graph SparsIfication and Network Pruning (GASSIP) method.
Our method achieves on-par or even higher node classification performance with half or fewer model parameters of searched GNNs and a sparser graph.
arXiv Detail & Related papers (2024-06-24T06:53:37Z)
- PosterLLaVa: Constructing a Unified Multi-modal Layout Generator with LLM [58.67882997399021]
Our research introduces a unified framework for automated graphic layout generation.
Our data-driven method employs structured text (JSON format) and visual instruction tuning to generate layouts.
We conduct extensive experiments and achieve state-of-the-art (SOTA) performance on public multi-modal layout generation benchmarks.
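The structured text mentioned here is a JSON serialization of a layout; since the paper's exact schema is not reproduced in this summary, the field names below are hypothetical, but they show the kind of object an LLM can be instruction-tuned to emit.

```python
# Hypothetical JSON layout of the kind a layout-generation LLM could emit.
# Field names (canvas, elements, category, bbox) are illustrative only.
import json

layout = {
    "canvas": {"width": 1024, "height": 1536},
    "elements": [
        {"category": "title", "bbox": [64, 80, 960, 200]},
        {"category": "image", "bbox": [64, 240, 960, 1100]},
        {"category": "text",  "bbox": [64, 1160, 960, 1400]},
    ],
}
print(json.dumps(layout, indent=2))
```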
arXiv Detail & Related papers (2024-06-05T03:05:52Z)
- Automatic Graph Topology-Aware Transformer [50.2807041149784]
We build a comprehensive graph Transformer search space with the micro-level and macro-level designs.
EGTAS evolves graph Transformer topologies at the macro level and graph-aware strategies at the micro level.
We demonstrate the efficacy of EGTAS across a range of graph-level and node-level tasks.
arXiv Detail & Related papers (2024-05-30T07:44:31Z)
- Zero-Shot RTL Code Generation with Attention Sink Augmented Large Language Models [0.0]
This paper discusses the possibility of exploiting large language models to streamline the code generation process in hardware design.
The ability to use large language models on RTL code generation not only expedites design cycles but also facilitates the exploration of design spaces.
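A minimal sketch of this zero-shot flow is shown below: the prompt fully specifies the module interface and behavior, and the model's reply is treated as Verilog source; query_llm is a hypothetical placeholder for whatever completion API is actually used.

```python
# Sketch of zero-shot RTL generation with an LLM: the prompt spells out the
# module interface and behaviour, and the model's reply is taken as Verilog.
# query_llm is a hypothetical placeholder, not an API from the cited paper.
prompt = (
    "Write a synthesizable Verilog module named gray_counter with inputs "
    "clk and rst_n and a 4-bit output count that advances in Gray-code "
    "order on every rising clock edge."
)

def query_llm(text: str) -> str:
    """Placeholder for a real completion call (e.g. a request to a hosted model)."""
    raise NotImplementedError

# verilog_src = query_llm(prompt)   # the returned text is then linted/simulated
```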
arXiv Detail & Related papers (2024-01-12T17:41:38Z)
- Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark [11.575901540758574]
We present our development experience for the Tiny Inference Benchmark on field-programmable gate array (FPGA) platforms.
We use the open-source hls4ml and FINN workflows, which aim to democratize AI-hardware codesign of optimized neural networks on FPGAs.
The solutions are deployed on system-on-chip (Pynq-Z2) and pure FPGA (Arty A7-100T) platforms.
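For readers unfamiliar with hls4ml, the (untested) sketch below shows the basic Keras-to-HLS conversion flow it provides; the toy model, configuration granularity, and Zynq-7020 part number are assumptions rather than the actual MLPerf Tiny submission settings.

```python
# Rough sketch of the hls4ml flow: Keras model -> HLS project for an FPGA.
# The toy model, config granularity, and part number are placeholders, not
# the settings used in the MLPerf Tiny submissions.
import hls4ml
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    keras.layers.Dense(5, activation="softmax"),
])

# Derive a baseline fixed-point configuration from the Keras model.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls4ml_prj",
    part="xc7z020clg400-1",  # Zynq-7020, the device on the Pynq-Z2 board
)
hls_model.compile()  # builds a C-simulation library for quick functional checks
```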
arXiv Detail & Related papers (2022-06-23T15:57:17Z)
- CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (tinyML) Acceleration on FPGAs [2.2177069086277195]
CFU Playground is a full-stack open-source framework that enables rapid and iterative design of machine learning (ML) accelerators for embedded ML systems.
Our tool provides a completely open-source end-to-end flow for hardware-software co-design on FPGAs and future systems research.
Our rapid, deploy-profile-optimization feedback loop lets ML hardware and software developers achieve significant returns out of a relatively small investment.
arXiv Detail & Related papers (2022-01-05T23:15:58Z)
- Graph signal processing for machine learning: A review and new perspectives [57.285378618394624]
We review a few important contributions made by GSP concepts and tools, such as graph filters and transforms, to the development of novel machine learning algorithms.
We discuss exploiting data structure and relational priors, improving data and computational efficiency, and enhancing model interpretability.
We provide new perspectives on future development of GSP techniques that may serve as a bridge between applied mathematics and signal processing on one side, and machine learning and network science on the other.
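As a concrete instance of the graph filters discussed in this review, the sketch below applies a short polynomial in the graph Laplacian to a node signal; the path graph, signal, and filter taps are arbitrary examples.

```python
# Toy polynomial graph filter: y = (c0*I + c1*L + c2*L^2) x on a 4-node path.
# The graph, signal, and filter taps are arbitrary examples.
import numpy as np

# Adjacency of the path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

coeffs = [1.0, -0.4, 0.05]            # filter taps c0, c1, c2
x = np.array([1.0, 0.0, 0.0, 1.0])    # graph signal, one value per node

y = np.zeros_like(x)
Lk_x = x.copy()                       # starts at L^0 x = x
for c in coeffs:
    y += c * Lk_x                     # accumulate c_k * L^k x
    Lk_x = L @ Lk_x                   # advance to the next power of L
print(y)
```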
arXiv Detail & Related papers (2020-07-31T13:21:33Z)
- Benchmarking Graph Neural Networks [75.42159546060509]
Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
For any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress.
The GitHub repository has reached 1,800 stars and 339 forks, which demonstrates the utility of the proposed open-source framework.
arXiv Detail & Related papers (2020-03-02T15:58:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.