DAVE: Deriving Automatically Verilog from English
- URL: http://arxiv.org/abs/2009.01026v1
- Date: Thu, 27 Aug 2020 15:25:03 GMT
- Title: DAVE: Deriving Automatically Verilog from English
- Authors: Hammond Pearce, Benjamin Tan, Ramesh Karri
- Abstract summary: Engineers undertake significant efforts to translate specifications for digital systems into programming languages.
We explore the use of state-of-the-art machine learning (ML) to automatically derive Verilog snippets from English.
- Score: 18.018512051180714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While specifications for digital systems are provided in natural language,
engineers undertake significant efforts to translate them into the programming
languages understood by compilers for digital systems. Automating this process
allows designers to work with the language in which they are most comfortable
-- the original natural language -- and focus instead on other downstream design
challenges. We explore the use of state-of-the-art machine learning (ML) to
automatically derive Verilog snippets from English via fine-tuning GPT-2, a
natural language ML system. We describe our approach for producing a suitable
dataset of novice-level digital design tasks and provide a detailed exploration
of GPT-2, finding encouraging translation performance across our task sets
(94.8% correct), with the ability to handle both simple and abstract design
tasks.
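The fine-tuning recipe described above amounts to training a causal language model on paired (English task, Verilog answer) strings and then completing from the task portion at inference time. A minimal sketch of that data formatting step is below; the `<TASK>`/`<CODE>`/`<END>` separator tokens and helper names are illustrative assumptions, not the paper's exact format.

```python
# Sketch: format (English task, Verilog snippet) pairs into single
# training strings for causal-LM fine-tuning, in the spirit of DAVE.
# The separator markers below are illustrative assumptions.

TASK_TAG = "<TASK>"
CODE_TAG = "<CODE>"
END_TAG = "<END>"

def format_example(task: str, verilog: str) -> str:
    """Join one English task and its Verilog answer into one training string."""
    return f"{TASK_TAG} {task.strip()} {CODE_TAG} {verilog.strip()} {END_TAG}"

def extract_verilog(generated: str) -> str:
    """Recover the Verilog portion from a model completion."""
    code = generated.split(CODE_TAG, 1)[-1]
    return code.split(END_TAG, 1)[0].strip()

example = format_example(
    "Assign the AND of a and b to wire o.",
    "assign o = a & b;",
)
```

At inference, the model is prompted with `<TASK> ... <CODE>` and generation is cut at the end marker, which is what `extract_verilog` mimics.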
Related papers
- Improving Speech Emotion Recognition in Under-Resourced Languages via Speech-to-Speech Translation with Bootstrapping Data Selection [49.27067541740956]
Speech Emotion Recognition (SER) is a crucial component in developing general-purpose AI agents capable of natural human-computer interaction.
Building robust multilingual SER systems remains challenging due to the scarcity of labeled data in languages other than English and Chinese.
We propose an approach to enhance SER performance in low SER resource languages by leveraging data from high-resource languages.
arXiv Detail & Related papers (2024-09-17T08:36:45Z) - CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks from their control flow and data flow to bridge the gap between programming languages and natural language.
Various experiments and ablations on four datasets, covering both C++ and Python, validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
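The "graphical view" idea above can be illustrated with a toy data-flow extractor: for each assignment, record edges from the names read to the name written. This simplification uses Python's standard `ast` module and is an assumption for illustration, not CodeGRAG's actual meta-graph construction.

```python
import ast

# Sketch: a toy "graphical view" of a code block, in the spirit of CodeGRAG.
# For each assignment statement we add edges (read_name -> written_name),
# yielding a crude data-flow graph. Illustrative only.

def dataflow_edges(source: str) -> set:
    """Return {(src, dst)} edges from names read to names assigned."""
    edges = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            reads = {n.id for n in ast.walk(node.value)
                     if isinstance(n, ast.Name)}
            for target in node.targets:
                if isinstance(target, ast.Name):
                    for r in reads:
                        edges.add((r, target.id))
    return edges

snippet = "c = a + b\nd = c * a\n"
edges = dataflow_edges(snippet)
```

Such edge sets could then be fed to a graph encoder alongside the natural-language description, which is the gap-bridging role the summary describes.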
arXiv Detail & Related papers (2024-05-03T02:48:55Z) - Natural Language to Verilog: Design of a Recurrent Spiking Neural Network using Large Language Models and ChatGPT [0.08388591755871733]
We employ OpenAI's ChatGPT-4 and natural language prompts to generate hardware description code, namely Verilog.
The resultant design was validated on three simple machine learning tasks: exclusive OR, IRIS flower classification, and MNIST handwritten-digit classification.
The design was submitted to Efabless Tiny Tapeout 6.
arXiv Detail & Related papers (2024-05-02T16:08:08Z) - Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs [7.746160514029531]
We demonstrate experimental results with LLMs that address robotics task planning problems.
Our approach acquires text descriptions of the task and scene objects, then formulates task planning through natural language reasoning.
Our approach is evaluated on a multi-modal prompt simulation benchmark.
arXiv Detail & Related papers (2024-03-20T17:58:12Z) - Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization [126.96113831681338]
In this paper, we propose to improve zero-shot cross-lingual transfer by composing language or task specialized parameters.
Our method composes language and task PEFT modules via element-wise arithmetic operations to leverage unlabeled data and English labeled data.
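The element-wise composition described above can be sketched in a few lines: a language delta and a task delta are simply added onto the base weights, coordinate by coordinate. The tiny lists below stand in for real adapter weight tensors and are purely illustrative assumptions.

```python
# Sketch of composing language- and task-specialized PEFT deltas by
# element-wise arithmetic, in the spirit of language/task arithmetic.
# The tiny vectors here stand in for real adapter weight tensors.

def compose(base, lang_delta, task_delta):
    """Add a language delta and a task delta onto base weights, element-wise."""
    return [b + l + t for b, l, t in zip(base, lang_delta, task_delta)]

base = [0.5, -0.2, 1.0]
lang_delta = [0.1, 0.0, -0.3]   # learned on unlabeled target-language data
task_delta = [-0.2, 0.4, 0.1]   # learned on English labeled task data

composed = compose(base, lang_delta, task_delta)
```

Because the composition is a pure vector sum, a module trained on unlabeled target-language text can be combined with a task module trained only on English labels, which is what enables the zero-shot transfer.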
arXiv Detail & Related papers (2023-11-15T20:04:58Z) - Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z) - Online Gesture Recognition using Transformer and Natural Language Processing [0.0]
Transformer architecture is shown to provide a powerful machine framework for online gestures corresponding to glyph strokes of natural language sentences.
arXiv Detail & Related papers (2023-05-05T10:17:22Z) - Building Machine Translation Systems for the Next Thousand Languages [102.24310122155073]
We describe results in three research domains: building clean, web-mined datasets for 1500+ languages, developing practical MT models for under-served languages, and studying the limitations of evaluation metrics for these languages.
We hope that our work provides useful insights to practitioners working towards building MT systems for currently understudied languages, and highlights research directions that can complement the weaknesses of massively multilingual models in data-sparse settings.
arXiv Detail & Related papers (2022-05-09T00:24:13Z) - Explore More Guidance: A Task-aware Instruction Network for Sign Language Translation Enhanced with Data Augmentation [20.125265661134964]
Sign language recognition and translation systems first use a recognition module to generate glosses from sign language videos.
In this work, we propose a task-aware instruction network, namely TIN-SLT, for sign language translation.
arXiv Detail & Related papers (2022-04-12T17:09:44Z) - From English to Signal Temporal Logic [7.6398837478968025]
We propose DeepSTL, a tool and technique for the translation of informal requirements, given as free English sentences, into Signal Temporal Logic (STL), a formal specification language for cyber-physical systems.
A major challenge to devise such a translator is the lack of publicly available informal requirements and formal specifications.
In the second step, we use a state-of-the-art transformer-based neural translation technique to train an accurate attentional translator of English to STL.
arXiv Detail & Related papers (2021-09-21T16:13:29Z) - Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.