Rome was Not Built in a Single Step: Hierarchical Prompting for LLM-based Chip Design
- URL: http://arxiv.org/abs/2407.18276v3
- Date: Mon, 9 Sep 2024 20:01:22 GMT
- Title: Rome was Not Built in a Single Step: Hierarchical Prompting for LLM-based Chip Design
- Authors: Andre Nakkab, Sai Qian Zhang, Ramesh Karri, Siddharth Garg
- Abstract summary: Large Language Models (LLMs) are effective in computer hardware synthesis via hardware description language (HDL) generation.
However, LLM-assisted approaches for HDL generation struggle when handling complex tasks.
We introduce a suite of hierarchical prompting techniques which facilitate efficient stepwise design methods.
- Score: 22.70660876673987
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large Language Models (LLMs) are effective in computer hardware synthesis via hardware description language (HDL) generation. However, LLM-assisted approaches for HDL generation struggle when handling complex tasks. We introduce a suite of hierarchical prompting techniques which facilitate efficient stepwise design methods, and develop a generalizable automation pipeline for the process. To evaluate these techniques, we present a benchmark set of hardware designs which have solutions with or without architectural hierarchy. Using these benchmarks, we compare various open-source and proprietary LLMs, including our own fine-tuned Code Llama-Verilog model. Our hierarchical methods automatically produce successful designs for complex hardware modules that standard flat prompting methods cannot achieve, allowing smaller open-source LLMs to compete with large proprietary models. Hierarchical prompting reduces HDL generation time and yields savings on LLM costs. Our experiments detail which LLMs are capable of which applications, and how to apply hierarchical methods in various modes. We explore case studies of generating complex cores using automatic scripted hierarchical prompts, including the first-ever LLM-designed processor with no human feedback. Tools for the Recurrent Optimization via Machine Editing (ROME) method can be found at https://github.com/ajn313/ROME-LLM
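To make the stepwise idea concrete, below is a minimal sketch of hierarchical prompting for HDL generation. It assumes a generic `generate(prompt)` chat-completion helper; the module hierarchy, prompt wording, and function names are illustrative assumptions and do not reflect the ROME tool's actual API.

```python
# Minimal sketch of stepwise hierarchical prompting for HDL generation.
# The hierarchy, prompt wording, and generate() helper are illustrative
# assumptions, not the actual ROME pipeline or its API.

def generate(prompt: str) -> str:
    """Placeholder for any chat-completion call; returns a stub so the sketch runs."""
    first_line = prompt.splitlines()[0]
    return f"// <LLM-generated Verilog for: {first_line[:60]}>"

# 1. Describe the design as a hierarchy: leaf submodules first, top module last.
hierarchy = [
    ("half_adder", "Write a Verilog module `half_adder` with inputs a, b and outputs sum, carry."),
    ("full_adder", "Using the `half_adder` module below, write a Verilog module `full_adder` "
                   "with inputs a, b, cin and outputs sum, cout."),
    ("adder_8bit", "Using the `full_adder` module below, write an 8-bit ripple-carry adder `adder_8bit`."),
]

# 2. Generate one level at a time, feeding previously generated HDL back into the
#    prompt so the model only has to solve a small, well-scoped step.
generated = {}
for name, task in hierarchy:
    context = "\n\n".join(generated.values())  # completed submodules so far
    prompt = f"{task}\n\n// Previously generated modules:\n{context}"
    generated[name] = generate(prompt)

# 3. Assemble the full design from the per-module outputs.
print("\n\n".join(generated.values()))
```

A flat prompt would instead request the complete `adder_8bit` in one shot; the loop above is the kind of stepwise decomposition that, per the abstract, lets smaller open-source models handle designs that flat prompting cannot.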
Related papers
- Improving Parallel Program Performance Through DSL-Driven Code Generation with LLM Optimizers [9.880183350366792]
Mapping computations to processors and assigning memory are critical for maximizing performance in parallel programming.
These mapping decisions are managed through the development of specialized low-level system code, called mappers, crafted by performance engineers.
We introduce an approach that leverages recent advances in LLM-based optimizers for mapper design.
In under ten minutes, our method automatically discovers mappers that surpass human expert designs in scientific applications, achieving up to 1.34X speedup.
arXiv Detail & Related papers (2024-10-21T04:08:37Z) - Open-domain Implicit Format Control for Large Language Model Generation [52.83173553689678]
We introduce a novel framework for controlled generation in large language models (LLMs).
This study investigates LLMs' capabilities to follow open-domain, one-shot constraints and replicate the format of the example answers.
We also develop a dataset collection methodology for supervised fine-tuning that enhances the open-domain format control of LLMs without degrading output quality.
arXiv Detail & Related papers (2024-08-08T11:51:45Z) - MTLLM: LLMs are Meaning-Typed Code Constructs [7.749453456370407]
This paper presents a simplified approach to integrating large language models (LLMs) into programming.
Our approach utilizes the semantic richness of existing programs to automatically translate between traditional programming languages and natural language.
We present a fully functional and production-grade implementation for our approach and compare it to SOTA LLM software development tools.
arXiv Detail & Related papers (2024-05-14T21:12:01Z) - CodecLM: Aligning Language Models with Tailored Synthetic Data [51.59223474427153]
We introduce CodecLM, a framework for adaptively generating high-quality synthetic data to improve instruction-following abilities.
We first encode seed instructions into metadata, which are concise keywords generated on-the-fly to capture the target instruction distribution.
We also introduce Self-Rubrics and Contrastive Filtering during decoding to tailor data-efficient samples.
arXiv Detail & Related papers (2024-04-08T21:15:36Z) - An Embarrassingly Simple Approach for LLM with Strong ASR Capacity [56.30595787061546]
We focus on automatic speech recognition (ASR), one of the most important tasks in speech processing, using speech foundation encoders and large language models (LLMs).
Recent works have complex designs such as compressing the output temporally for the speech encoder, tackling modal alignment for the projector, and utilizing parameter-efficient fine-tuning for the LLM.
We find that such delicate designs are not necessary: an embarrassingly simple composition of an off-the-shelf speech encoder, an LLM, and a single trainable linear projector is competent for the ASR task.
arXiv Detail & Related papers (2024-02-13T23:25:04Z) - If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z) - RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs [32.01139974519813]
We present RedCoast, a tool crafted to automate distributed training and inference for large language models (LLMs).
We also propose a mechanism that allows for the customization of diverse ML pipelines through the definition of merely three functions.
As a result, Redco implementations exhibit significantly fewer lines of code compared to their official counterparts.
arXiv Detail & Related papers (2023-10-25T04:32:35Z) - Revisiting Prompt Engineering via Declarative Crowdsourcing [16.624577543520093]
Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone.
We put forth a vision for declarative prompt engineering.
Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach.
arXiv Detail & Related papers (2023-08-07T18:04:12Z) - LLM-Pruner: On the Structural Pruning of Large Language Models [65.02607075556742]
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation.
We tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset.
Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures.
arXiv Detail & Related papers (2023-05-19T12:10:53Z) - Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z)