VeriGen: A Large Language Model for Verilog Code Generation
- URL: http://arxiv.org/abs/2308.00708v1
- Date: Fri, 28 Jul 2023 02:57:14 GMT
- Title: VeriGen: A Large Language Model for Verilog Code Generation
- Authors: Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, Siddharth Garg
- Abstract summary: We fine-tune pre-existing Large Language Models (LLMs) on Verilog datasets compiled from GitHub and Verilog textbooks.
Here, our fine-tuned open-source CodeGen-16B model outperforms the commercial state-of-the-art gpt-3.5-turbo model, with a 1.1% overall increase in functional correctness.
Notably, it demonstrates a 41% improvement in generating syntactically correct Verilog code across various problem categories compared to its pre-trained counterpart.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this study, we explore the capability of Large Language Models (LLMs) to
automate hardware design by generating high-quality Verilog code, a common
language for designing and modeling digital systems. We fine-tune pre-existing
LLMs on Verilog datasets compiled from GitHub and Verilog textbooks. We
evaluate the functional correctness of the generated Verilog code using a
specially designed test suite, featuring a custom problem set and testing
benches. Here, our fine-tuned open-source CodeGen-16B model outperforms the
commercial state-of-the-art gpt-3.5-turbo model, with a 1.1% overall increase
in functional correctness.
Upon testing with a more diverse and complex problem set, we find that the
fine-tuned model shows competitive performance against state-of-the-art
gpt-3.5-turbo, excelling in certain scenarios. Notably, it demonstrates a 41%
improvement in generating syntactically correct Verilog code across various
problem categories compared to its pre-trained counterpart, highlighting the
potential of smaller, in-house LLMs in hardware design automation.
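
To make the evaluation protocol concrete, below is a minimal sketch of the kind of check the abstract describes, assuming Icarus Verilog (iverilog/vvp) is installed: compile a generated module together with a testbench (compilation doubles as the syntax check) and simulate it. The module, testbench, and pass criterion here are illustrative only; the paper's actual problem set and testing benches are custom.

```python
# Minimal sketch: judge one generated Verilog candidate against a testbench.
import pathlib
import subprocess
import tempfile

# Hypothetical model output for "2-to-1 multiplexer"; stands in for LLM output.
CANDIDATE = """\
module mux2 (input a, input b, input sel, output y);
  assign y = sel ? b : a;
endmodule
"""

# Hand-written testbench that prints PASS only if both selections behave.
TESTBENCH = """\
module tb;
  reg a, b, sel;
  wire y;
  mux2 dut (.a(a), .b(b), .sel(sel), .y(y));
  initial begin
    a = 0; b = 1; sel = 0; #1;
    if (y !== 1'b0) $display("FAIL");
    else begin
      sel = 1; #1;
      if (y !== 1'b1) $display("FAIL");
      else $display("PASS");
    end
    $finish;
  end
endmodule
"""

def check_candidate(candidate: str, testbench: str) -> bool:
    """Compile candidate + testbench, simulate, and report functional pass/fail."""
    with tempfile.TemporaryDirectory() as tmpdir:
        tmp = pathlib.Path(tmpdir)
        (tmp / "dut.v").write_text(candidate)
        (tmp / "tb.v").write_text(testbench)
        # Compilation doubles as the syntax check in this line of work.
        build = subprocess.run(
            ["iverilog", "-o", str(tmp / "sim"), str(tmp / "dut.v"), str(tmp / "tb.v")],
            capture_output=True, text=True,
        )
        if build.returncode != 0:
            return False  # syntactically invalid Verilog
        sim = subprocess.run(["vvp", str(tmp / "sim")], capture_output=True, text=True)
        return "PASS" in sim.stdout

if __name__ == "__main__":
    print("functionally correct:", check_candidate(CANDIDATE, TESTBENCH))
```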
Related papers
- CraftRTL: High-quality Synthetic Data Generation for Verilog Code Models with Correct-by-Construction Non-Textual Representations and Targeted Code Repair (arXiv, 2024-09-19)
This paper first presents an analysis of fine-tuned LLMs on Verilog coding, with synthetic data from prior methods.
We identify two main issues: difficulty handling non-textual representations, and significant training variability, with models randomly making "minor" mistakes.
Our fine-tuned Starcoder2-15B outperforms prior state-of-the-art results by 3.8%, 10.9%, and 6.6% for pass@1 on VerilogEval-Machine, VerilogEval-Human, and RTLLM (pass@k is computed as in the sketch after this list).
- CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization (arXiv, 2024-07-15)
This paper introduces CodeV, a series of open-source instruction-tuned Verilog generation LLMs.
We show that CodeV surpasses the previous open-source state of the art by a relative 14.4% (over BetterV on VerilogEval) and 11.3% (over RTLCoder on RTLLM).
- A Multi-Expert Large Language Model Architecture for Verilog Code Generation (arXiv, 2024-04-11)
This paper introduces an innovative multi-expert LLM architecture for Verilog code generation (MEV-LLM).
The architecture integrates multiple LLMs, each fine-tuned on a dataset categorized by a distinct level of design complexity (a toy dispatch sketch follows this list).
Experiments show notable improvements in the percentage of generated Verilog outputs that are syntactically and functionally correct.
- Data is all you need: Finetuning LLMs for Chip Design via an Automated design-data augmentation framework (arXiv, 2024-03-17)
This paper proposes an automated design-data augmentation framework, which generates high-volume and high-quality natural language aligned with Verilog and EDA scripts.
The accuracy of Verilog generation surpasses that of the current state-of-the-art open-source Verilog generation model, increasing from 58.8% to 70.6% on the same benchmark.
- Design2Code: Benchmarking Multimodal Code Generation for Automated Front-End Engineering (arXiv, 2024-03-05)
We construct Design2Code, the first real-world benchmark for this task.
We manually curate 484 diverse real-world webpages as test cases and develop a set of automatic evaluation metrics.
Our fine-grained break-down metrics indicate that models mostly lag in recalling visual elements from the input webpages and generating correct layout designs.
- StarCoder 2 and The Stack v2: The Next Generation (arXiv, 2024-02-29)
We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens.
We thoroughly evaluate them on a comprehensive set of Code LLM benchmarks.
Our large model, StarCoder2-15B, significantly outperforms other models of comparable size.
- LLM-Assisted Code Cleaning For Training Accurate Code Generators (arXiv, 2023-11-25)
We investigate data quality for code and find that making the code more structured and readable improves the code generation performance of the system.
We build a novel data-cleaning pipeline that uses these principles to transform existing programs.
We evaluate our approach on two challenging algorithmic code generation benchmarks and find that fine-tuning CodeLLaMa-7B improves the performance by up to 30% compared to fine-tuning on the original dataset.
- VerilogEval: Evaluating Large Language Models for Verilog Code Generation (arXiv, 2023-09-14)
We present a comprehensive evaluation dataset consisting of 156 problems from the Verilog instructional website HDLBits.
The evaluation set consists of a diverse set of Verilog code generation tasks, ranging from simple combinational circuits to complex finite state machines.
- Teaching Large Language Models to Self-Debug (arXiv, 2023-04-11)
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations; a minimal loop sketch follows this list.
- Benchmarking Large Language Models for Automated Verilog RTL Code Generation (arXiv, 2022-12-13)
We characterize the ability of large language models (LLMs) to generate useful Verilog.
We construct an evaluation framework comprising test-benches for functional analysis and a flow to test the syntax of Verilog code.
Our findings show that, across our problem scenarios, fine-tuning makes LLMs more capable of producing syntactically correct code.
- Measuring Coding Challenge Competence With APPS (arXiv, 2021-05-20)
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, ranging from simple one-line solutions to substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
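
Several entries above (CraftRTL, VerilogEval) report pass@1 numbers. For reference, this is the standard unbiased pass@k estimator popularized by the Codex work, computed from n samples per problem of which c pass; it is an assumption that these papers use exactly this formula, though it is the field's common choice.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k samples is correct),
    given n generated samples of which c pass the tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per problem, 5 of which pass the testbench.
print(pass_at_k(20, 5, 1))             # 0.25
print(round(pass_at_k(20, 5, 10), 3))  # 0.984
```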
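For the multi-expert architecture (MEV-LLM) above, here is a hedged toy sketch of the dispatch idea: classify a prompt's design complexity, then route it to the expert fine-tuned on that tier. The heuristic classifier, the tier names, and the expert stubs are hypothetical stand-ins, not the paper's actual categorization or models.

```python
from typing import Callable, Dict

def classify_complexity(prompt: str) -> str:
    """Toy heuristic stand-in for the paper's complexity categorization."""
    text = prompt.lower()
    if "state machine" in text or "fsm" in text:
        return "advanced"
    if "pipeline" in text or "counter" in text:
        return "intermediate"
    return "basic"

def make_expert(tier: str) -> Callable[[str], str]:
    # Stand-in for an LLM fine-tuned on designs of one complexity tier.
    return lambda prompt: f"// [{tier} expert] Verilog for: {prompt}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    tier: make_expert(tier) for tier in ("basic", "intermediate", "advanced")
}

def generate_verilog(prompt: str) -> str:
    """Route the prompt to the expert matching its estimated complexity."""
    return EXPERTS[classify_complexity(prompt)](prompt)

print(generate_verilog("Design a Moore FSM traffic-light controller"))
```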
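For the Self-Debugging entry above, a minimal sketch of the generate-run-repair loop under stated assumptions: `llm` and `run_tests` are hypothetical stand-ins supplied by the caller, and the paper's actual prompting relies on few-shot demonstrations rather than this bare feedback template.

```python
from typing import Callable, Optional

def self_debug(
    llm: Callable[[str], str],
    task: str,
    run_tests: Callable[[str], Optional[str]],
    max_rounds: int = 3,
) -> str:
    """Generate code, execute tests, and feed failures back to the model."""
    code = llm(task)
    for _ in range(max_rounds):
        error = run_tests(code)  # None means every test passed
        if error is None:
            return code
        # Show the model its own code plus the execution feedback.
        feedback = (
            f"{task}\n\nPrevious attempt:\n{code}\n\n"
            f"It failed with:\n{error}\nFix the program."
        )
        code = llm(feedback)
    return code  # best effort after max_rounds
```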