LLM4EDA: Emerging Progress in Large Language Models for Electronic
Design Automation
- URL: http://arxiv.org/abs/2401.12224v1
- Date: Thu, 28 Dec 2023 15:09:14 GMT
- Title: LLM4EDA: Emerging Progress in Large Language Models for Electronic
Design Automation
- Authors: Ruizhe Zhong, Xingbo Du, Shixiong Kai, Zhentao Tang, Siyuan Xu,
Hui-Ling Zhen, Jianye Hao, Qiang Xu, Mingxuan Yuan, Junchi Yan
- Abstract summary: Large Language Models (LLMs) have demonstrated their capability in context understanding, logical reasoning, and answer generation.
We present a systematic study on the application of LLMs in the EDA field.
We highlight future research directions, focusing on applying LLMs to logic synthesis, physical design, and multi-modal feature extraction and alignment of circuits.
- Score: 74.7163199054881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driven by Moore's Law, the complexity and scale of modern chip design are
increasing rapidly. Electronic Design Automation (EDA) has been widely applied
to address the challenges encountered in the full chip design process. However,
the evolution of very large-scale integrated circuits has made chip design
time-consuming and resource-intensive, requiring substantial prior expert
knowledge. Additionally, human intervention at intermediate stages is crucial
for finding optimal solutions. In the system design stage, circuits are usually
represented in a Hardware Description Language (HDL), a textual format.
Recently, Large Language Models (LLMs) have demonstrated their capability in
context understanding, logical reasoning, and answer generation. Since circuits
can be represented textually in HDL, it is reasonable to ask
whether LLMs can be leveraged in the EDA field to achieve fully automated chip
design and generate circuits with improved power, performance, and area (PPA).
In this paper, we present a systematic study on the application of LLMs in the
EDA field, categorizing it into the following cases: 1) assistant chatbot, 2)
HDL and script generation, and 3) HDL verification and analysis. Additionally,
we highlight future research directions, focusing on applying LLMs to logic
synthesis, physical design, and multi-modal feature extraction and alignment of
circuits. We collect up-to-date papers in this field at the following link:
https://github.com/Thinklab-SJTU/Awesome-LLM4EDA.
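To make the "HDL and script generation" category concrete, the following minimal sketch shows the prompt-and-check pattern it implies. `query_llm` is a hypothetical placeholder for whatever chat-completion API is available, and the adder prompt and the structural check are illustrative assumptions, not the paper's method; real flows would hand the generated HDL to a simulator or synthesis tool.

```python
# Minimal sketch of the "HDL and script generation" use case described above.
# `query_llm` is a hypothetical stand-in for any chat-completion API; the
# prompt-and-check loop, not the specific model, is the point of the example.
import re

def query_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM service.
    return (
        "module adder #(parameter WIDTH = 8) (\n"
        "    input  [WIDTH-1:0] a,\n"
        "    input  [WIDTH-1:0] b,\n"
        "    output [WIDTH:0]   sum\n"
        ");\n"
        "    assign sum = a + b;\n"
        "endmodule\n"
    )

PROMPT = (
    "Write synthesizable Verilog for a parameterized unsigned adder "
    "with a carry-out bit. Reply with the module only."
)

def looks_like_verilog_module(src: str) -> bool:
    # Cheap structural sanity check before handing the code to a real
    # simulator or synthesis tool (which would do the actual verification).
    return bool(re.search(r"\bmodule\b", src)) and "endmodule" in src

hdl = query_llm(PROMPT)
assert looks_like_verilog_module(hdl)
print(hdl)
```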
Related papers
- Are LLMs Any Good for High-Level Synthesis? [1.3927943269211591]
Large Language Models (LLMs) can streamline or replace the High-Level Synthesis (HLS) process.
LLMs can understand natural language specifications and translate C code or natural language specifications into HLS designs.
This study aims to illuminate the role of LLMs in HLS, identifying promising directions for optimized hardware design in applications such as AI acceleration, embedded systems, and high-performance computing.
arXiv Detail & Related papers (2024-08-19T21:40:28Z)
- Case2Code: Learning Inductive Reasoning with Synthetic Data [105.89741089673575]
We propose a Case2Code task by exploiting the expressiveness and correctness of programs.
We first evaluate representative LLMs on the synthesized Case2Code task and demonstrate that the Case-to-code induction is challenging for LLMs.
Experimental results show that such induction training not only benefits in-distribution Case2Code performance but also enhances various coding abilities of the trained LLMs.
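For concreteness, here is a toy sketch of what a Case2Code-style instance might look like: input-output cases are generated by executing a known program, and an induced candidate is verified by re-execution. The helper names (`hidden_program`, `check`) and the linear-function example are illustrative assumptions, not the paper's pipeline.

```python
# A toy illustration of the Case2Code setup: synthesize input-output cases
# from a known program, then ask a model to induce the program back from
# the cases alone. Details here are illustrative, not the paper's pipeline.
import random

def hidden_program(x: int) -> int:
    return 3 * x + 1  # "ground truth" program used to generate cases

cases = [(x, hidden_program(x)) for x in random.sample(range(-50, 50), 5)]

prompt = (
    "Induce a Python function f(x) consistent with these cases:\n"
    + "\n".join(f"f({x}) = {y}" for x, y in cases)
)
print(prompt)

def check(candidate_src: str) -> bool:
    # Correctness of an induced program is checkable by execution,
    # which is what makes program-induction data cheap to verify.
    scope: dict = {}
    exec(candidate_src, scope)
    return all(scope["f"](x) == y for x, y in cases)

assert check("def f(x):\n    return 3 * x + 1")
```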
arXiv Detail & Related papers (2024-07-17T11:35:00Z)
- APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking [39.649879274238856]
We introduce a novel automatic prompt engineering algorithm named APEER.
APEER iteratively generates refined prompts through feedback and preference optimization.
Experiments demonstrate the substantial performance improvement of APEER over existing state-of-the-art (SoTA) manual prompts.
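A minimal sketch of the iterative refine-and-score loop that APEER's description suggests appears below; `rewrite_prompt` and `dev_score` are hypothetical stand-ins, and the paper's actual feedback and preference-optimization steps are more involved than this greedy loop.

```python
# Skeleton of an iterative prompt-refinement loop in the spirit of APEER:
# generate a candidate prompt, score it on a dev set, and keep the best.
# `rewrite_prompt` and `dev_score` are hypothetical placeholders.
def rewrite_prompt(prompt: str, feedback: str) -> str:
    # Placeholder: in practice an LLM rewrites the prompt using feedback.
    return prompt + f"\n# refined using feedback: {feedback}"

def dev_score(prompt: str) -> float:
    # Placeholder metric; a real loop would measure reranking quality
    # of the prompt on a held-out development set.
    return float(len(prompt) % 7)

prompt = "Rank the passages below by relevance to the query."
best, best_score = prompt, dev_score(prompt)
for step in range(5):
    candidate = rewrite_prompt(best, feedback=f"iteration {step}")
    score = dev_score(candidate)
    if score > best_score:  # preference: keep the higher-scoring prompt
        best, best_score = candidate, score
print(best)
```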
arXiv Detail & Related papers (2024-06-20T16:11:45Z)
- Digital ASIC Design with Ongoing LLMs: Strategies and Prospects [0.0]
Large Language Models (LLMs) have been seen as a promising development, with the potential to automate the generation of Hardware Description Language (HDL) code.
This paper presents targeted strategies to harness the capabilities of LLMs for digital ASIC design.
arXiv Detail & Related papers (2024-04-25T05:16:57Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- Revisiting Prompt Engineering via Declarative Crowdsourcing [16.624577543520093]
Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone.
We put forth a vision for declarative prompt engineering.
Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach.
arXiv Detail & Related papers (2023-08-07T18:04:12Z)
- Large Language Models as General Pattern Machines [64.75501424160748]
We show that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences.
Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary.
In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics.
arXiv Detail & Related papers (2023-07-10T17:32:13Z)
- A Survey on Multimodal Large Language Models [71.63375558033364]
Multimodal Large Language Models (MLLMs), represented by GPT-4V, have become a new, rapidly rising research hotspot.
This paper aims to trace and summarize the recent progress of MLLMs.
arXiv Detail & Related papers (2023-06-23T15:21:52Z)
- Intelligent Circuit Design and Implementation with Machine Learning [0.0]
I present multiple fast yet accurate machine learning models covering a wide range of chip design stages.
I present APOLLO, a fully automated power modeling framework.
I also present RouteNet for early routability prediction.
arXiv Detail & Related papers (2022-06-07T06:17:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.