Designing Silicon Brains using LLM: Leveraging ChatGPT for Automated
Description of a Spiking Neuron Array
- URL: http://arxiv.org/abs/2402.10920v1
- Date: Thu, 25 Jan 2024 21:21:38 GMT
- Title: Designing Silicon Brains using LLM: Leveraging ChatGPT for Automated
Description of a Spiking Neuron Array
- Authors: Michael Tomlinson, Joe Li, Andreas Andreou
- Abstract summary: We present the prompts used to guide ChatGPT4 to produce a synthesizable and functional Verilog description for a programmable Spiking Neuron Array ASIC.
This design flow showcases the current state of using ChatGPT4 for natural-language-driven hardware design.
- Score: 1.137846619087643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have made headlines for synthesizing
correct-sounding responses to a variety of prompts, including code generation.
In this paper, we present the prompts used to guide ChatGPT4 to produce a
synthesizable and functional Verilog description for the entirety of a
programmable Spiking Neuron Array ASIC. This design flow showcases the current
state of using ChatGPT4 for natural-language-driven hardware design. The
AI-generated design was verified in simulation using handcrafted testbenches
and has been submitted for fabrication in Skywater 130nm through Tiny Tapeout 5
using an open-source EDA flow.
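As background for the abstract above, the sketch below illustrates the kind of per-neuron behavior a spiking neuron array typically implements: a leaky integrate-and-fire (LIF) update. The paper does not specify its neuron model or parameters, so the update rule, threshold, leak, and weight below are illustrative assumptions, written as a Python behavioral model rather than the generated Verilog.

```python
# Illustrative sketch only: a leaky integrate-and-fire (LIF) neuron update,
# the kind of per-neuron behavior a spiking neuron array ASIC might implement.
# The actual neuron model, bit widths, and parameters of the paper's design
# are not given here; these values are assumed for demonstration.

def lif_step(v, spike_in, weight, leak=1, threshold=64, v_reset=0):
    """One discrete time step of an integer-state LIF neuron.

    v        : current membrane potential (integer, as in fixed-point hardware)
    spike_in : 1 if an input spike arrived this step, else 0
    weight   : synaptic weight added per input spike
    Returns (new_potential, output_spike).
    """
    v = v + weight * spike_in - leak   # integrate input, apply constant leak
    v = max(v, 0)                      # clamp the potential at zero
    if v >= threshold:                 # fire and reset when threshold is crossed
        return v_reset, 1
    return v, 0

# Example: drive the neuron with an input spike every step; it fires periodically.
v, spikes = 0, []
for t in range(20):
    v, s = lif_step(v, spike_in=1, weight=10)
    spikes.append(s)
print(spikes)
```

In hardware, each array element would hold the membrane potential in a register and perform this accumulate-leak-compare-reset cycle every clock or time step; programmability (as in the paper's design) typically means the weights, leak, and threshold are writable configuration registers.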
Related papers
- NoviCode: Generating Programs from Natural Language Utterances by Novices [59.71218039095155]
We present NoviCode, a novel NL Programming task which takes as input an API and a natural language description by a novice non-programmer.
We show that NoviCode is indeed a challenging task in the code synthesis domain, and that generating complex code from non-technical instructions goes beyond the current Text-to-Code paradigm.
arXiv Detail & Related papers (2024-07-15T11:26:03Z) - Natural Language to Verilog: Design of a Recurrent Spiking Neural Network using Large Language Models and ChatGPT [0.08388591755871733]
We employ OpenAI's ChatGPT4 and natural language prompts to generate hardware description code, namely Verilog.
The resultant design was validated in three simple machine learning tasks, the exclusive OR, the IRIS flower classification and the MNIST hand-written digit classification.
The design was submitted to Efabless Tiny Tapeout 6.
arXiv Detail & Related papers (2024-05-02T16:08:08Z) - CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code [56.019447113206006]
Large Language Models (LLMs) have achieved remarkable progress in code generation.
CodeIP is a novel multi-bit watermarking technique that embeds additional information to preserve provenance details.
Experiments conducted on a real-world dataset across five programming languages demonstrate the effectiveness of CodeIP.
arXiv Detail & Related papers (2024-04-24T04:25:04Z) - LLM-aided explanations of EDA synthesis errors [10.665347817363623]
Large Language Models (LLMs) have demonstrated text comprehension and question-answering capabilities.
We generate 936 error message explanations using three OpenAI LLMs over 21 different buggy code samples.
These are then graded for relevance and correctness, and we find that in approximately 71% of cases the LLMs give correct & complete explanations suitable for novice learners.
arXiv Detail & Related papers (2024-04-07T07:12:16Z) - ChatGPT for PLC/DCS Control Logic Generation [1.773257587850857]
Large language models (LLMs) providing generative AI have become popular to support software engineers in creating, summarizing, optimizing, and documenting source code.
It is still unknown how LLMs can support control engineers using typical control programming languages in programming tasks.
We created 100 LLM prompts in 10 representative categories to analyze control logic generation for PLCs and DCS from natural language.
arXiv Detail & Related papers (2023-05-25T07:46:53Z) - Chip-Chat: Challenges and Opportunities in Conversational Hardware
Design [27.760832802199637]
Artificial intelligence (AI) has demonstrated capabilities for machine-based end-to-end translations.
Large Language Models (LLMs) claim to be able to produce code in a variety of programming languages.
This 'Chip-Chat' resulted in what we believe to be the world's first wholly AI-written HDL for tapeout.
arXiv Detail & Related papers (2023-05-22T17:13:33Z) - Type-driven Neural Programming by Example [0.0]
We look into programming by example (PBE), which is about finding a program mapping given inputs to given outputs.
We propose a way to incorporate programming types into a neural program synthesis approach for PBE.
arXiv Detail & Related papers (2020-08-28T12:30:05Z) - Investigation of learning abilities on linguistic features in
sequence-to-sequence text-to-speech synthesis [48.151894340550385]
Neural sequence-to-sequence text-to-speech synthesis (TTS) can produce high-quality speech directly from text or simple linguistic features such as phonemes.
We investigate under what conditions the neural sequence-to-sequence TTS can work well in Japanese and English.
arXiv Detail & Related papers (2020-05-20T23:26:14Z) - Improved Code Summarization via a Graph Neural Network [96.03715569092523]
In general, source code summarization techniques take the source code as input and output a natural language description.
We present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries.
arXiv Detail & Related papers (2020-04-06T17:36:42Z) - CodeBERT: A Pre-Trained Model for Programming and Natural Languages [117.34242908773061]
CodeBERT is a pre-trained model for programming language (PL) and natural language (NL).
We develop CodeBERT with Transformer-based neural architecture.
We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters.
arXiv Detail & Related papers (2020-02-19T13:09:07Z) - Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)