Natural Language to Verilog: Design of a Recurrent Spiking Neural Network using Large Language Models and ChatGPT
- URL: http://arxiv.org/abs/2405.01419v3
- Date: Tue, 22 Oct 2024 18:31:22 GMT
- Title: Natural Language to Verilog: Design of a Recurrent Spiking Neural Network using Large Language Models and ChatGPT
- Authors: Paola Vitolo, George Psaltakis, Michael Tomlinson, Gian Domenico Licciardo, Andreas G. Andreou
- Abstract summary: We employ OpenAI's ChatGPT4 and natural language prompts to generate hardware description code, namely Verilog.
The resultant design was validated on three simple machine learning tasks: the exclusive OR, the IRIS flower classification, and the MNIST hand-written digit classification.
The design was submitted to Efabless Tiny Tapeout 6.
- Score: 0.08388591755871733
- Abstract: This paper investigates the use of Large Language Models (LLMs) and natural language prompts to generate hardware description code, namely Verilog. Building on our prior work, we employ OpenAI's ChatGPT4 and natural language prompts to synthesize an RTL Verilog module of a programmable recurrent spiking neural network, while also generating test benches to assess the system's correctness. The resultant design was validated on three simple machine learning tasks: the exclusive OR, the IRIS flower classification, and the MNIST hand-written digit classification. Furthermore, the design was validated on a Field-Programmable Gate Array (FPGA) and subsequently synthesized in the SkyWater 130 nm technology using an open-source electronic design automation flow. The design was submitted to Efabless Tiny Tapeout 6.
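To make the target concrete, the following is a minimal sketch of a discrete-time leaky integrate-and-fire (LIF) neuron in synthesizable Verilog, the kind of building block such a prompt-driven flow produces, together with a small directed testbench in the spirit of the LLM-generated benches the paper describes. All module names, bit widths, and the shift-based leak are illustrative assumptions, not the paper's actual generated design.

```verilog
// Illustrative sketch only: a minimal discrete-time leaky integrate-and-fire
// (LIF) neuron of the kind an LLM might be prompted to produce. Parameter
// names, widths, and the shift-based leak are assumptions, not the paper's
// generated design.
module lif_neuron #(
    parameter WIDTH      = 16,  // membrane potential width (assumed)
    parameter LEAK_SHIFT = 4    // leak = potential >>> LEAK_SHIFT (assumed)
) (
    input  wire                    clk,
    input  wire                    rst_n,
    input  wire signed [WIDTH-1:0] syn_current,  // summed synaptic input
    input  wire signed [WIDTH-1:0] threshold,    // programmable firing threshold
    output reg                     spike
);
    reg signed [WIDTH-1:0] potential;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            potential <= 0;
            spike     <= 1'b0;
        end else if (potential >= threshold) begin
            potential <= 0;       // reset membrane potential after a spike
            spike     <= 1'b1;
        end else begin
            // integrate the input, then apply a proportional leak
            potential <= potential + syn_current - (potential >>> LEAK_SHIFT);
            spike     <= 1'b0;
        end
    end
endmodule

// Minimal directed testbench sketch (also an assumption, not the paper's
// generated bench): drive a constant input current and report spikes.
module tb_lif_neuron;
    reg clk = 0, rst_n = 0;
    reg signed [15:0] syn_current = 16'sd100;
    reg signed [15:0] threshold   = 16'sd500;
    wire spike;

    lif_neuron dut (.clk(clk), .rst_n(rst_n), .syn_current(syn_current),
                    .threshold(threshold), .spike(spike));

    always #5 clk = ~clk;  // 10-time-unit clock period

    initial begin
        #12 rst_n = 1;               // release reset
        repeat (50) @(posedge clk);  // run long enough to observe spiking
        $display("simulation done");
        $finish;
    end

    always @(posedge clk)
        if (spike) $display("spike at t=%0t", $time);
endmodule
```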
Related papers
- Natural language is not enough: Benchmarking multi-modal generative AI for Verilog generation [37.309663295844835]
We introduce an open-source benchmark for multi-modal generative models tailored for Verilog synthesis from visual-linguistic inputs.
We also introduce an open-source visual and natural language Verilog query language framework.
Our results demonstrate a significant improvement in the multi-modal generated Verilog compared to queries based solely on natural language.
arXiv Detail & Related papers (2024-07-11T13:10:09Z)
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks from their control flow and data flow to bridge the gap between programming languages and natural language.
Experiments and ablations on four datasets spanning C++ and Python validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
- Designing Silicon Brains using LLM: Leveraging ChatGPT for Automated Description of a Spiking Neuron Array [1.137846619087643]
We present the prompts used to guide ChatGPT4 to produce a synthesizable and functional Verilog description for a programmable Spiking Neuron Array ASIC.
This design flow showcases the current state of using ChatGPT4 for natural-language-driven hardware design; a hypothetical sketch of how such an array might be assembled follows this entry.
arXiv Detail & Related papers (2024-01-25T21:21:38Z)
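As a companion to the single-neuron sketch above, here is a minimal, hypothetical way a programmable array of such neurons could be assembled with a Verilog generate loop. The port layout, shared threshold, and parameter names are assumptions, not the ChatGPT4-generated ASIC description the paper presents.

```verilog
// Illustrative sketch only: assembling a programmable array of spiking
// neurons from the lif_neuron module sketched earlier. The generate loop,
// flattened input bus, and shared threshold are assumptions.
module neuron_array #(
    parameter N     = 8,   // number of neurons (assumed)
    parameter WIDTH = 16
) (
    input  wire                      clk,
    input  wire                      rst_n,
    input  wire signed [N*WIDTH-1:0] syn_currents,  // flattened per-neuron inputs
    input  wire signed [WIDTH-1:0]   threshold,     // shared programmable threshold
    output wire        [N-1:0]       spikes
);
    genvar i;
    generate
        for (i = 0; i < N; i = i + 1) begin : neurons
            lif_neuron #(.WIDTH(WIDTH)) u_neuron (
                .clk        (clk),
                .rst_n      (rst_n),
                .syn_current(syn_currents[i*WIDTH +: WIDTH]),
                .threshold  (threshold),
                .spike      (spikes[i])
            );
        end
    endgenerate
endmodule
```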
- In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study in-context learning (ICL) through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z)
- Zero-Shot RTL Code Generation with Attention Sink Augmented Large Language Models [0.0]
This paper discusses the possibility of exploiting large language models to streamline the code generation process in hardware design.
Using large language models for RTL code generation not only expedites design cycles but also facilitates the exploration of design spaces.
arXiv Detail & Related papers (2024-01-12T17:41:38Z)
- Neural Markov Prolog [57.13568543360899]
We propose the language Neural Markov Prolog (NMP) as a means to bridge first order logic and neural network design.
NMP allows for the easy generation and presentation of architectures for images, text, relational databases, or other target data types.
arXiv Detail & Related papers (2023-11-27T21:41:47Z)
- ChipGPT: How far are we from natural language hardware design [34.22592995908168]
This work attempts to demonstrate an automated design environment that explores LLMs to generate hardware logic designs from natural language specifications.
We present a scalable four-stage zero-code logic design framework based on LLMs without retraining or finetuning.
arXiv Detail & Related papers (2023-05-23T12:54:02Z)
- DAVE: Deriving Automatically Verilog from English [18.018512051180714]
Engineers undertake significant efforts to translate specifications for digital systems into programming languages.
We explore the use of state-of-the-art machine learning (ML) to automatically derive Verilog snippets from English; an illustrative specification-to-snippet pairing follows this entry.
arXiv Detail & Related papers (2020-08-27T15:25:03Z)
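To illustrate the kind of English-to-Verilog pairing DAVE targets, here is a hypothetical specification sentence with a matching snippet. Both the wording and the code are assumptions, not samples from the paper's dataset.

```verilog
// Hypothetical example of the task DAVE addresses: an English specification
// paired with a Verilog snippet. Both are illustrative assumptions.
//
// Spec: "On each rising clock edge, a 4-bit register r is cleared when rst
//        is high; otherwise it is loaded with input d when en is high."
module spec_example (
    input  wire       clk,
    input  wire       rst,
    input  wire       en,
    input  wire [3:0] d,
    output reg  [3:0] r
);
    always @(posedge clk) begin
        if (rst)
            r <= 4'b0000;   // synchronous clear
        else if (en)
            r <= d;         // conditional load
    end
endmodule
```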
- Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
- CodeBERT: A Pre-Trained Model for Programming and Natural Languages [117.34242908773061]
CodeBERT is a pre-trained model for programming language (PL) and natural language (NL).
We develop CodeBERT with a Transformer-based neural architecture.
We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters.
arXiv Detail & Related papers (2020-02-19T13:09:07Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)