iDSE: Navigating Design Space Exploration in High-Level Synthesis Using LLMs
- URL: http://arxiv.org/abs/2505.22086v2
- Date: Sat, 31 May 2025 11:52:20 GMT
- Title: iDSE: Navigating Design Space Exploration in High-Level Synthesis Using LLMs
- Authors: Runkai Li, Jia Xiong, Xi Wang
- Abstract summary: High-Level Synthesis serves as an agile hardware development tool. Traditional design space exploration (DSE) methods still suffer from prohibitive exploration costs and suboptimal results. We introduce iDSE, the first LLM-aided DSE framework that leverages design quality perception to effectively navigate the design space.
- Score: 3.578537533079004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-Level Synthesis (HLS) serves as an agile hardware development tool that streamlines circuit design by abstracting the register transfer level into behavioral descriptions, while allowing designers to customize the generated microarchitectures through optimization directives. However, the combinatorial explosion of possible directive configurations yields an intractable design space. Traditional design space exploration (DSE) methods, despite adopting heuristics or constructing predictive models to accelerate Pareto-optimal design acquisition, still suffer from prohibitive exploration costs and suboptimal results. Addressing these concerns, we introduce iDSE, the first LLM-aided DSE framework that leverages HLS design quality perception to effectively navigate the design space. iDSE intelligently prunes the design space to guide LLMs in calibrating representative initial sampling designs, expediting convergence toward the Pareto front. By exploiting the convergent and divergent thinking patterns inherent in LLMs for hardware optimization, iDSE achieves multi-path refinement of design quality and diversity. Extensive experiments demonstrate that iDSE outperforms heuristic-based DSE methods by 5.1$\times$$\sim$16.6$\times$ in proximity to the reference Pareto front, matching NSGA-II with only 4.6% of the explored designs. Our work demonstrates the transformative potential of LLMs in scalable and efficient HLS design optimization, offering new insights into multi-objective optimization challenges.
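The abstract describes a loop of design-space pruning, LLM-calibrated initial sampling, and multi-path refinement toward the Pareto front. The paper's code is not reproduced here; the following is a minimal hypothetical Python sketch of such an LLM-in-the-loop DSE skeleton, where `propose_configs` stands in for an LLM query and `synthesize` for an HLS run. Every name and the cost model are illustrative assumptions, not the authors' API.

```python
import random

# Hypothetical sketch of an LLM-in-the-loop DSE skeleton. In a real flow,
# propose_configs would query an LLM with the HLS source and prior QoR
# results, and synthesize would invoke an HLS tool; both are stubbed here.
def propose_configs(history, n=4):
    # Pretend-LLM: perturb the best configuration seen so far.
    if history:
        base = min(history, key=lambda h: sum(h[1]))[0]
    else:
        base = {"unroll": 1, "pipeline": 0}
    return [{"unroll": max(1, base["unroll"] * random.choice((1, 2))),
             "pipeline": random.choice((0, 1))} for _ in range(n)]

def synthesize(cfg):
    # Stub cost model returning (latency, area) for a directive configuration.
    latency = 100.0 / cfg["unroll"] * (0.6 if cfg["pipeline"] else 1.0)
    area = 10.0 * cfg["unroll"] + (5.0 if cfg["pipeline"] else 0.0)
    return latency, area

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

history, pareto = [], []
for _ in range(8):  # exploration budget, in rounds of LLM proposals
    for cfg in propose_configs(history):
        qor = synthesize(cfg)
        history.append((cfg, qor))
        if not any(dominates(p, qor) for _, p in pareto):
            pareto = [(c, p) for c, p in pareto if not dominates(qor, p)]
            pareto.append((cfg, qor))

for cfg, (lat, area) in pareto:
    print(cfg, f"latency={lat:.1f} area={area:.1f}")
```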
Related papers
- CROP: Circuit Retrieval and Optimization with Parameter Guidance using LLMs [4.481239665281804]
We present CROP, the first large language model (LLM)-powered automatic VLSI design flow tuning framework. Our approach includes: (1) a scalable methodology for transforming RTL source code into dense vector representations, (2) an embedding-based retrieval system for matching designs with semantically similar circuits, and (3) a retrieval-augmented generation (RAG)-enhanced LLM-guided parameter search system. Experiment results demonstrate CROP's ability to achieve superior quality-of-results (QoR) with fewer iterations than existing approaches on industrial designs, including a 9.9% reduction in power consumption.
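Component (2), embedding-based retrieval, follows a familiar pattern: embed each design and rank stored circuits by similarity. The sketch below is a generic illustration under that assumption; `embed_rtl` is a hypothetical placeholder, not CROP's learned encoder.

```python
import numpy as np

def embed_rtl(src: str) -> np.ndarray:
    # Hypothetical placeholder encoder; a real system would use a learned
    # embedding model over the RTL source.
    vec = np.zeros(64)
    for i, byte in enumerate(src.encode()):
        vec[i % 64] += byte
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query_src: str, corpus: dict, k: int = 3) -> list:
    # Rank stored designs by cosine similarity to the query design
    # (vectors are unit-normalized, so a dot product suffices).
    q = embed_rtl(query_src)
    scored = [(name, float(q @ embed_rtl(src))) for name, src in corpus.items()]
    return [name for name, _ in sorted(scored, key=lambda s: -s[1])[:k]]

corpus = {"fir_filter": "module fir(input clk, ...); endmodule",
          "fft_core": "module fft(input clk, ...); endmodule"}
print(retrieve("module fir_v2(input clk, ...); endmodule", corpus, k=1))
```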
arXiv Detail & Related papers (2025-07-02T20:25:47Z) - ExpertSteer: Intervening in LLMs through Expert Knowledge [71.12193680015622]
Activation steering offers a promising method to control the generation process of Large Language Models. We propose ExpertSteer, a novel approach that leverages arbitrary specialized expert models to generate steering vectors. We conduct comprehensive experiments using three LLMs on 15 popular benchmarks across four distinct domains.
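The mechanism underlying activation steering is compact: add a steering vector to a layer's hidden states during the forward pass. The toy snippet below shows that mechanism with a PyTorch forward hook on a stand-in layer; how ExpertSteer derives the vector from expert models is not shown here, so the vector is just a random placeholder.

```python
import torch
import torch.nn as nn

hidden = 16
layer = nn.Linear(hidden, hidden)  # stand-in for a transformer sublayer
# Placeholder steering vector; ExpertSteer derives it from expert models.
steering_vector = 0.1 * torch.randn(hidden)

def steer(module, inputs, output):
    # Shift the layer's activations along the steering direction.
    return output + steering_vector

handle = layer.register_forward_hook(steer)
x = torch.randn(2, hidden)
steered = layer(x)    # hook applies the shift
handle.remove()
unsteered = layer(x)  # plain forward pass
print(torch.allclose(steered, unsteered + steering_vector))  # True
```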
arXiv Detail & Related papers (2025-05-18T08:55:46Z) - Direct Retrieval-augmented Optimization: Synergizing Knowledge Selection and Language Models [83.8639566087953]
We propose a direct retrieval-augmented optimization framework, named DRO, that enables end-to-end training of two key components. DRO alternates between two phases: (i) document permutation estimation and (ii) re-weighted optimization, progressively improving the RAG components. Our theoretical analysis reveals that DRO is analogous to policy-gradient methods in reinforcement learning.
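For context on that analogy (our reading, not a claim lifted from the paper): the REINFORCE identity $\nabla_\theta \mathbb{E}_{z \sim \pi_\theta}[R(z)] = \mathbb{E}_{z \sim \pi_\theta}[R(z)\,\nabla_\theta \log \pi_\theta(z)]$ lets one optimize an expected reward through a sampling distribution; in DRO's setting the sampled object $z$ would be the document permutation and $R$ the downstream generation quality.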
arXiv Detail & Related papers (2025-05-05T23:54:53Z) - Can Reasoning Models Reason about Hardware? An Agentic HLS Perspective [18.791753740931185]
OpenAI o3-mini and DeepSeek-R1 use enhanced reasoning through Chain-of-Thought (CoT). This paper investigates whether reasoning LLMs can address challenges in High-Level Synthesis (HLS) design space exploration and optimization.
arXiv Detail & Related papers (2025-03-17T01:21:39Z) - AIRCHITECT v2: Learning the Hardware Accelerator Design Space through Unified Representations [3.6231171463908938]
Design space exploration plays a crucial role in enabling custom hardware architectures. AIrchitect v1 was the first attempt to address the limitations of DSE by recasting it as a search-time classification problem.
arXiv Detail & Related papers (2025-01-17T04:57:42Z) - Learning to Compare Hardware Designs for High-Level Synthesis [44.408523725466374]
High-level synthesis (HLS) is an automated design process that transforms high-level code into hardware designs. HLS relies on pragmas, which are directives inserted into the source code to guide the synthesis process. We propose compareXplore, a novel approach that learns to compare hardware designs for effective HLS optimization.
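Learning to compare pairs of designs is a standard ranking setup; the sketch below is a generic illustration of that idea (not compareXplore's actual model), training a scorer with a margin ranking loss so that the preferred design of each pair scores higher.

```python
import torch
import torch.nn as nn

# Scorer mapping a design's feature vector to a scalar quality score.
scorer = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
loss_fn = nn.MarginRankingLoss(margin=0.1)

# Toy pairs: `better` designs should outscore the matched `worse` designs.
better = torch.randn(64, 8)
worse = better - 0.5  # synthetic offset marking inferior designs

for _ in range(200):
    opt.zero_grad()
    s_b = scorer(better).squeeze(-1)
    s_w = scorer(worse).squeeze(-1)
    # target = 1 asks the loss to push s_b above s_w by the margin
    loss = loss_fn(s_b, s_w, torch.ones_like(s_b))
    loss.backward()
    opt.step()

accuracy = (scorer(better) > scorer(worse)).float().mean().item()
print(f"pairwise accuracy: {accuracy:.2f}")
```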
arXiv Detail & Related papers (2024-09-20T00:47:29Z) - Deep Inverse Design for High-Level Synthesis [1.9029532975354944]
We propose Deep Inverse Design for HLS (DID4HLS), a novel approach that integrates graph neural networks and generative models. DID4HLS iteratively optimizes hardware designs aimed at compute-intensive algorithms by learning conditional distributions of design features from post-HLS data. Compared to four state-of-the-art DSE baselines, our method achieved an average improvement of 42.8% in average distance to the reference set.
arXiv Detail & Related papers (2024-07-11T18:13:38Z) - Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark [166.40879020706151]
This paper proposes a shift towards BP-free, zeroth-order (ZO) optimization as a solution for reducing memory costs during fine-tuning.
Unlike traditional ZO-SGD methods, our work expands the exploration to a wider array of ZO optimization techniques.
Our study unveils previously overlooked optimization principles, highlighting the importance of task alignment, the role of the forward gradient method, and the balance between algorithm complexity and fine-tuning performance.
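The two-point gradient estimator at the heart of ZO-SGD can be stated in a few lines; the sketch below demonstrates it on a toy quadratic (a textbook illustration of the technique, not code from the benchmark).

```python
import numpy as np

def zo_grad(f, x, mu=1e-3):
    # Two-point zeroth-order estimate along a random direction u:
    #   g ~= (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u
    # Only forward evaluations of f are needed; no backpropagation.
    u = np.random.randn(*x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

f = lambda x: float(np.sum((x - 3.0) ** 2))  # toy loss, minimum at x = 3
x = np.zeros(4)
for _ in range(2000):
    x -= 0.05 * zo_grad(f, x)  # ZO-SGD step
print(np.round(x, 2))  # converges toward [3. 3. 3. 3.]
```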
arXiv Detail & Related papers (2024-02-18T14:08:48Z) - An Embarrassingly Simple Approach for LLM with Strong ASR Capacity [56.30595787061546]
We focus on one of the most important tasks in the field of speech processing, automatic speech recognition (ASR), with speech foundation encoders and large language models (LLMs).
Recent works have complex designs such as compressing the output temporally for the speech encoder, tackling modal alignment for the projector, and utilizing parameter-efficient fine-tuning for the LLM.
We found that delicate designs are not necessary, while an embarrassingly simple composition of off-the-shelf speech encoder, LLM, and the only trainable linear projector is competent for the ASR task.
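The "embarrassingly simple" composition described, a frozen speech encoder and frozen LLM joined by a single trainable linear projector, can be mirrored in a toy module. The sketch below uses stand-in networks (a GRU for the encoder, one transformer layer for the LLM), since the paper's exact components are not listed here.

```python
import torch
import torch.nn as nn

class SpeechLLM(nn.Module):
    """Frozen encoder + frozen LLM; only the linear projector trains."""
    def __init__(self, enc_dim=80, llm_dim=256):
        super().__init__()
        self.encoder = nn.GRU(enc_dim, enc_dim, batch_first=True)  # stand-in speech encoder
        self.projector = nn.Linear(enc_dim, llm_dim)               # the sole trainable module
        self.llm = nn.TransformerEncoderLayer(llm_dim, nhead=4, batch_first=True)  # stand-in LLM
        for module in (self.encoder, self.llm):
            for p in module.parameters():
                p.requires_grad = False

    def forward(self, feats):
        h, _ = self.encoder(feats)           # (batch, time, enc_dim)
        return self.llm(self.projector(h))   # map speech features into the LLM's space

model = SpeechLLM()
out = model(torch.randn(2, 50, 80))  # batch of 50-frame, 80-dim feature sequences
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, "trainable params:", trainable)
```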
arXiv Detail & Related papers (2024-02-13T23:25:04Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)