Leveraging High-Level Synthesis and Large Language Models to Generate,
Simulate, and Deploy a Uniform Random Number Generator Hardware Design
- URL: http://arxiv.org/abs/2311.03489v4
- Date: Fri, 5 Jan 2024 23:56:40 GMT
- Title: Leveraging High-Level Synthesis and Large Language Models to Generate,
Simulate, and Deploy a Uniform Random Number Generator Hardware Design
- Authors: James T. Meech
- Abstract summary: We present a new high-level synthesis methodology for using large language model tools to generate hardware designs.
As a case study, we use our methodology to generate a permuted congruential random number generator design with a Wishbone interface.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new high-level synthesis methodology for using large
language model tools to generate hardware designs. The methodology uses
exclusively open-source tools, with the exception of the large language model
itself. As a case study, we use our methodology to generate a permuted
congruential random number generator design with a Wishbone interface. We
verify the functionality and quality of the random number generator design
using large language model-generated simulations and the Dieharder randomness
test suite. We document all of the large language model chat logs, Python
scripts, Verilog scripts, and simulation results used in the case study. We
believe that our method of hardware design generation, coupled with the
open-source 130 nm silicon design tools, will revolutionize
application-specific integrated circuit design. Our methodology significantly
lowers the barrier to entry for building domain-specific computing
accelerators for the Internet of Things, and for proof-of-concept prototypes
intended for later fabrication in more modern process nodes.
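The paper's generator is a permuted congruential generator (PCG) exposed over a Wishbone bus; the Verilog itself is not reproduced here. As a rough illustration of the algorithm family, below is a minimal software reference model of the widely used PCG32 (XSH-RR) variant, the kind of golden model one might check LLM-generated simulations against. The class name, seeding convention, and stream parameter are illustrative, not taken from the paper.

```python
# Minimal PCG32 (XSH-RR) reference model -- a sketch for simulation checks,
# not the paper's Verilog. Constants follow O'Neill's PCG paper.
import sys

MASK64 = (1 << 64) - 1
MULTIPLIER = 6364136223846793005

class PCG32:
    def __init__(self, seed, stream=0):
        # The LCG increment must be odd; it is derived from the stream id.
        self.inc = ((stream << 1) | 1) & MASK64
        self.state = 0
        self.next_u32()
        self.state = (self.state + seed) & MASK64
        self.next_u32()

    def next_u32(self):
        old = self.state
        # 64-bit LCG state transition, modulo 2**64.
        self.state = (old * MULTIPLIER + self.inc) & MASK64
        # Output permutation: xorshift-high, then a data-dependent rotate.
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

if __name__ == "__main__":
    # Stream raw 32-bit words to stdout so the output can be piped into
    # Dieharder, e.g.: python3 pcg32_ref.py | dieharder -a -g 200
    rng = PCG32(seed=42)
    while True:
        sys.stdout.buffer.write(rng.next_u32().to_bytes(4, "little"))
```

Dieharder's generator 200 (stdin_input_raw) reads raw binary from standard input, so either a software model like this or captured hardware output can be run through the full test battery the same way.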
Related papers
- Natural language is not enough: Benchmarking multi-modal generative AI for Verilog generation [37.309663295844835]
We introduce an open-source benchmark for multi-modal generative models tailored for Verilog synthesis from visual-linguistic inputs.
We also introduce an open-source visual and natural language Verilog query language framework.
Our results demonstrate a significant improvement in the Verilog generated from multi-modal queries compared to queries based solely on natural language.
arXiv Detail & Related papers (2024-07-11T13:10:09Z)
- Natural Language to Verilog: Design of a Recurrent Spiking Neural Network using Large Language Models and ChatGPT [0.08388591755871733]
We employ OpenAI's ChatGPT-4 and natural language prompts to synthesize an RTL Verilog module of a programmable recurrent spiking neural network.
The resulting design was validated in three case studies: the exclusive-OR function, Iris flower classification, and MNIST handwritten-digit classification, achieving accuracies of up to 96.6%.
arXiv Detail & Related papers (2024-05-02T16:08:08Z)
- Zero-Shot RTL Code Generation with Attention Sink Augmented Large Language Models [0.0]
This paper discusses the possibility of exploiting large language models to streamline the code generation process in hardware design.
Using large language models for RTL code generation not only expedites design cycles but also facilitates exploration of the design space.
arXiv Detail & Related papers (2024-01-12T17:41:38Z)
- Multi-lingual Evaluation of Code Generation Models [82.7357812992118]
We present new benchmarks for evaluating code generation models: MBXP, Multilingual HumanEval, and MathQA-X.
These datasets cover over 10 programming languages, allowing us to assess the performance of code generation models in a multi-lingual fashion.
arXiv Detail & Related papers (2022-10-26T17:17:06Z)
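The entry above does not name its scoring metric, but execution-based benchmarks in this family (HumanEval and its multilingual descendants) are typically reported with the unbiased pass@k estimator introduced with Codex; the following is a minimal sketch of that estimator, not something specified in the entry itself.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: of n sampled completions, c passed the tests.

    Returns the probability that at least one of k samples drawn without
    replacement from the n completions passes.
    """
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 37 correct -> pass@10
print(round(pass_at_k(200, 37, 10), 4))
```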
- Code Generation Tools (Almost) for Free? A Study of Few-Shot, Pre-Trained Language Models on Code [13.15617135394116]
Few-shot learning with large-scale, pre-trained language models is a powerful way to answer questions about code.
This paper studies to what extent a state-of-the-art, pre-trained language model of code, Codex, may serve this purpose.
arXiv Detail & Related papers (2022-06-02T23:15:42Z)
- Summarize and Generate to Back-translate: Unsupervised Translation of Programming Languages [86.08359401867577]
Back-translation is widely known for its effectiveness in neural machine translation when little to no parallel data is available.
We propose performing back-translation via code summarization and generation.
We show that our proposed approach performs competitively with state-of-the-art methods.
arXiv Detail & Related papers (2022-05-23T08:20:41Z)
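The round trip described in the entry above can be sketched as follows; summarize and generate stand in for trained summarization and generation models (hypothetical names, not the paper's API), and the pairing loop is a simplification of the actual training procedure.

```python
def make_pseudo_parallel(target_corpus, summarize, generate):
    """Build pseudo-parallel training pairs from monolingual code.

    summarize: code -> natural-language summary        (hypothetical model)
    generate:  (summary, language) -> code             (hypothetical model)
    """
    pairs = []
    for target_code in target_corpus:
        # Code -> summary -> code in the *source* language: the summary acts
        # as the pivot that ordinary back-translation would get from a
        # reverse translation model.
        summary = summarize(target_code)
        pseudo_source = generate(summary, language="java")  # illustrative choice
        # Train the translator on (noisy source, clean target), exactly as
        # in back-translation for neural machine translation.
        pairs.append((pseudo_source, target_code))
    return pairs
```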
- Twist Decoding: Diverse Generators Guide Each Other [116.20780037268801]
We introduce Twist decoding, a simple and general inference algorithm that generates text while benefiting from diverse models.
Our method does not assume the vocabulary, tokenization or even generation order is shared.
arXiv Detail & Related papers (2022-05-19T01:27:53Z)
- Learning Sparse Prototypes for Text Generation [120.38555855991562]
Prototype-driven text generation is inefficient at test time because it must store and index the entire training corpus.
We propose a novel generative model that automatically learns a sparse prototype support set that achieves strong language modeling performance.
In experiments, our model outperforms previous prototype-driven language models while achieving up to a 1000x memory reduction.
arXiv Detail & Related papers (2020-06-29T19:41:26Z)
- Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)