From English to ASIC: Hardware Implementation with Large Language Model
- URL: http://arxiv.org/abs/2403.07039v1
- Date: Mon, 11 Mar 2024 09:57:16 GMT
- Title: From English to ASIC: Hardware Implementation with Large Language Model
- Authors: Emil Goh, Maoyang Xiang, I-Chyn Wey, T. Hui Teo
- Abstract summary: This paper focuses on the fine-tuning of a leading-edge natural language model and the reshuffling of the HDL code dataset.
The fine-tuning aims to enhance the model's proficiency in generating precise and efficient ASIC designs.
The dataset reshuffling is intended to broaden the scope and improve the quality of training material.
- Score: 0.210674772139335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of ASIC engineering, the landscape has been significantly
reshaped by the rapid development of LLMs, paralleled by an increase in the
complexity of modern digital circuits. This complexity has escalated the
requirements for HDL coding, necessitating a higher degree of precision and
sophistication. However, challenges have been faced due to the
less-than-optimal performance of modern language models in generating hardware
description code, a situation further exacerbated by the scarcity of the
corresponding high-quality code datasets. These challenges have highlighted the
gap between the potential of LLMs to revolutionize digital circuit design and
their current capabilities in accurately interpreting and implementing hardware
specifications. To address these challenges, a strategy focusing on the
fine-tuning of a leading-edge natural language model and the reshuffling of
the HDL code dataset has been developed. The fine-tuning aims to enhance
the model's proficiency in generating precise and efficient ASIC designs, while the
dataset reshuffling is intended to broaden the scope and improve the quality of
training material. The model demonstrated significant improvements compared to
the base model, with an approximate 10% to 20% increase in accuracy on the
pass@1 metric across a wide range of sampling temperatures. This approach is expected to
facilitate a simplified and more efficient LLM-assisted framework for complex
circuit design, leveraging their capabilities to meet the sophisticated demands
of HDL coding and thus streamlining the ASIC development process.
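As a concrete illustration of the fine-tuning-plus-reshuffling strategy the abstract describes, the sketch below fine-tunes an open code LLM on a shuffled HDL corpus with LoRA adapters via the HuggingFace transformers/peft/datasets stack. The base model name, the `hdl_corpus.jsonl` file, the `text` field, and all hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch (assumptions flagged above): LoRA fine-tune of a code LLM
# on a shuffled HDL corpus using the HuggingFace stack.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "deepseek-ai/deepseek-coder-6.7b-base"  # assumed base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA adapters keep the fine-tune cheap: only low-rank updates are trained.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16,
                                         lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

# "Reshuffling" read here as: shuffle the HDL corpus so related samples do
# not cluster within training batches (one plausible reading of the abstract).
corpus = load_dataset("json", data_files="hdl_corpus.jsonl", split="train")
corpus = corpus.shuffle(seed=42)

def tokenize(example):
    # Each JSONL record is assumed to hold one HDL module under "text".
    return tokenizer(example["text"], truncation=True, max_length=2048)

train_ds = corpus.map(tokenize, remove_columns=corpus.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="hdl-ft",
                           per_device_train_batch_size=2,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

LoRA is one common way to make a 7B-class fine-tune affordable; a full-parameter fine-tune would follow the same outline without the `get_peft_model` wrapper.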
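The pass@1 figure quoted above is conventionally computed with the unbiased pass@k estimator of Chen et al. (2021): sample n completions per problem at a given temperature, count the c that pass a testbench, and estimate the probability that a single draw succeeds. A minimal sketch follows; the sample counts are made up for illustration, not the paper's results.

```python
# Unbiased pass@k estimator (Chen et al., 2021), evaluated per temperature.
# The (temperature, n, c) tuples below are illustrative numbers only.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k sampled completions passes,
    given n total samples of which c pass the testbench."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# For each temperature: generate n completions per prompt, simulate them,
# count the c that pass, then estimate pass@1.
for temperature, n, c in [(0.2, 20, 9), (0.5, 20, 7), (0.8, 20, 5)]:
    print(f"T={temperature}: pass@1 = {pass_at_k(n, c, k=1):.2f}")
```

For k=1 the estimator reduces to c/n, so sweeping temperature amounts to re-running generation and simulation at each setting.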
Related papers
- Exploring Code Language Models for Automated HLS-based Hardware Generation: Benchmark, Infrastructure and Analysis [49.998130983414924]
Large language models (LLMs) can be employed for programming languages such as Python and C++.
This paper explores leveraging LLMs to generate High-Level Synthesis (HLS)-based hardware design.
arXiv Detail & Related papers (2025-02-19T17:53:59Z)
- A Text-Based Knowledge-Embedded Soft Sensing Modeling Approach for General Industrial Process Tasks Based on Large Language Model [16.842988666530204]
Data-driven soft sensors (DDSS) have become mainstream methods for predicting key performance indicators in process industries.
However, DDSS development requires complex and costly customized designs tailored to each task during the modeling process.
We propose a general framework named LLM-TKESS (large language model for text-based knowledge-embedded soft sensing) for enhanced soft sensing modeling.
arXiv Detail & Related papers (2025-01-09T08:59:14Z)
- HADES: Hardware Accelerated Decoding for Efficient Speculation in Large Language Models [1.2180334969164464]
Large Language Models (LLMs) have revolutionized natural language processing by understanding and generating human-like text.
This paper introduces Hardware Accelerated Decoding (HADES), a novel approach to enhance the performance and energy efficiency of LLMs.
arXiv Detail & Related papers (2024-12-27T21:19:01Z)
- SynerGen-VL: Towards Synergistic Image Understanding and Generation with Vision Experts and Token Folding [66.74446220401296]
We propose SynerGen-VL, a simple yet powerful encoder-free MLLM capable of both image understanding and generation.
We introduce the token folding mechanism and the vision-expert-based progressive alignment pretraining strategy, which effectively support high-resolution image understanding.
Our code and models shall be released.
arXiv Detail & Related papers (2024-12-12T18:59:26Z)
- HiVeGen -- Hierarchical LLM-based Verilog Generation for Scalable Chip Design [55.54477725000291]
HiVeGen is a hierarchical Verilog generation framework that decomposes generation tasks into hierarchical submodules.
It integrates automatic Design Space Exploration (DSE) into hierarchy-aware prompt generation, introducing weight-based retrieval to enhance code reuse,
and supports real-time human-computer interaction to lower error-correction cost, significantly improving the quality of generated designs.
arXiv Detail & Related papers (2024-12-06T19:37:53Z)
- Are LLMs Any Good for High-Level Synthesis? [1.3927943269211591]
Large Language Models (LLMs) can streamline or replace the High-Level Synthesis (HLS) process.
LLMs can understand natural language specifications and translate C code or natural language specifications into hardware designs.
This study aims to illuminate the role of LLMs in HLS, identifying promising directions for optimized hardware design in applications such as AI acceleration, embedded systems, and high-performance computing.
arXiv Detail & Related papers (2024-08-19T21:40:28Z)
- SOLO: A Single Transformer for Scalable Vision-Language Modeling [74.05173379908703]
We present SOLO, a single transformer for visiOn-Language mOdeling.
A unified single-Transformer architecture, like SOLO, effectively addresses scalability concerns in large vision-language models (LVLMs).
In this paper, we introduce the first open-source training recipe for developing SOLO, an open-source 7B LVLM.
arXiv Detail & Related papers (2024-07-08T22:40:15Z)
- Digital ASIC Design with Ongoing LLMs: Strategies and Prospects [0.0]
Large Language Models (LLMs) have been seen as a promising development, with the potential to automate the generation of Hardware Description Language (HDL) code.
This paper presents targeted strategies to harness the capabilities of LLMs for digital ASIC design.
arXiv Detail & Related papers (2024-04-25T05:16:57Z)
- LLM4EDA: Emerging Progress in Large Language Models for Electronic Design Automation [74.7163199054881]
Large Language Models (LLMs) have demonstrated their capability in context understanding, logic reasoning and answer generation.
We present a systematic study on the application of LLMs in the EDA field.
We highlight the future research direction, focusing on applying LLMs in logic synthesis, physical design, multi-modal feature extraction and alignment of circuits.
arXiv Detail & Related papers (2023-12-28T15:09:14Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)