Deep Inverse Design for High-Level Synthesis
- URL: http://arxiv.org/abs/2407.08797v3
- Date: Fri, 11 Apr 2025 01:47:22 GMT
- Title: Deep Inverse Design for High-Level Synthesis
- Authors: Ping Chang, Tosiron Adegbija, Yuchao Liao, Claudio Talarico, Ao Li, Janet Roveda
- Abstract summary: We propose Deep Inverse Design for HLS (DID4HLS), a novel approach that integrates graph neural networks and generative models. DID4HLS iteratively optimizes hardware designs aimed at compute-intensive algorithms by learning conditional distributions of design features from post-HLS data. Compared to four state-of-the-art DSE baselines, our method achieved an average improvement of 42.8% on average distance to reference set (ADRS).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-level synthesis (HLS) has significantly advanced the automation of digital circuit design, yet the need for expertise and time in pragma tuning remains challenging. Existing solutions for design space exploration (DSE) adopt either heuristic methods, which lack the information needed to realize further optimization potential, or predictive models, which lack sufficient generalization due to the time-consuming nature of HLS and the exponential growth of the design space. To address these challenges, we propose Deep Inverse Design for HLS (DID4HLS), a novel approach that integrates graph neural networks and generative models. DID4HLS iteratively optimizes hardware designs aimed at compute-intensive algorithms by learning conditional distributions of design features from post-HLS data. Compared to four state-of-the-art DSE baselines, our method achieved an average improvement of 42.8% on average distance to reference set (ADRS) over the best-performing baselines across six benchmarks, while demonstrating high robustness and efficiency. The code is available at https://github.com/PingChang818/DID4HLS.
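The abstract above reports results as ADRS (average distance to reference set). A minimal sketch of the metric may help: this follows a common DSE formulation (for each reference Pareto point, the smallest relative per-objective gap to the approximate front, averaged), and is not the paper's exact implementation; objectives are assumed positive and minimized.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset (minimization in every objective)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for p in pts:
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(p)
    return np.array(keep)

def adrs(reference, approx):
    """Average Distance to Reference Set: for each reference Pareto point,
    the distance to the closest point of the approximate front, averaged.
    Lower is better; 0 means the reference front is matched exactly."""
    ref = np.asarray(reference, dtype=float)
    app = np.asarray(approx, dtype=float)
    dists = []
    for r in ref:
        # relative gap per objective, clipped at 0 (no credit for beating r)
        gaps = np.max(np.maximum((app - r) / r, 0.0), axis=1)
        dists.append(gaps.min())
    return float(np.mean(dists))
```

Matching the reference front exactly yields an ADRS of 0, so "42.8% improvement on ADRS" means the approximate fronts found by DID4HLS sit that much closer to the reference Pareto fronts than the best baseline's.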
Related papers
- ForgeHLS: A Large-Scale, Open-Source Dataset for High-Level Synthesis [13.87691887333415]
We introduce ForgeHLS, a large-scale, open-source dataset explicitly designed for machine learning (ML)-driven HLS research. ForgeHLS comprises over 400k diverse designs generated from 846 kernels covering a broad range of application domains. Compared to existing datasets, ForgeHLS significantly enhances scale, diversity, and design coverage.
arXiv Detail & Related papers (2025-07-04T02:23:46Z) - iDSE: Navigating Design Space Exploration in High-Level Synthesis Using LLMs [3.578537533079004]
High-Level Synthesis serves as an agile hardware development tool. Traditional design space exploration (DSE) methods still suffer from prohibitive exploration costs and suboptimal results. We introduce iDSE, the first LLM-aided DSE framework that leverages design quality perception to effectively navigate the design space.
arXiv Detail & Related papers (2025-05-28T08:08:57Z) - SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis [89.99161034065614]
Retrieval-augmented generation (RAG) systems have advanced large language models (LLMs) in complex deep search scenarios. Existing approaches face critical limitations: they lack high-quality training trajectories and suffer from distributional mismatches. This paper introduces SimpleDeepSearcher, a framework that bridges the gap through strategic data engineering rather than complex training paradigms.
arXiv Detail & Related papers (2025-05-22T16:05:02Z) - Intelligent4DSE: Optimizing High-Level Synthesis Design Space Exploration with Graph Neural Networks and Large Language Models [3.8429489584622156]
We propose CoGNNs-LLMEA, a framework that integrates a graph neural network with task-adaptive message passing and a large language model-enhanced evolutionary algorithm.
As a predictive model, CoGNNs directly leverages intermediate representations generated from source code after compiler front-end processing, enabling prediction of quality of results (QoR) without invoking HLS tools.
CoGNNs achieves state-of-the-art accuracy in post-HLS QoR prediction, reducing mean prediction errors by 2.8× for latency and 3.4× for resource utilization compared to baseline models.
arXiv Detail & Related papers (2025-04-28T10:08:56Z) - Dynamic Noise Preference Optimization for LLM Self-Improvement via Synthetic Data [51.62162460809116]
We introduce Dynamic Noise Preference Optimization (DNPO) to ensure consistent improvements across iterations.
In experiments with Zephyr-7B, DNPO consistently outperforms existing methods, showing an average performance boost of 2.6%.
DNPO shows a significant improvement in model-generated data quality, with a 29.4% win-loss rate gap compared to the baseline in GPT-4 evaluations.
arXiv Detail & Related papers (2025-02-08T01:20:09Z) - Automatically Learning Hybrid Digital Twins of Dynamical Systems [56.69628749813084]
Digital Twins (DTs) simulate the states and temporal dynamics of real-world systems.
DTs often struggle to generalize to unseen conditions in data-scarce settings.
In this paper, we propose an evolutionary algorithm (HDTwinGen) to autonomously propose, evaluate, and optimize hybrid digital twins (HDTwins).
arXiv Detail & Related papers (2024-10-31T07:28:22Z) - Learning to Compare Hardware Designs for High-Level Synthesis [44.408523725466374]
High-level synthesis (HLS) is an automated design process that transforms high-level code into hardware designs.
HLS relies on pragmas, which are directives inserted into the source code to guide the synthesis process.
We propose compareXplore, a novel approach that learns to compare hardware designs for effective HLS optimization.
arXiv Detail & Related papers (2024-09-20T00:47:29Z) - Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers [19.13500546022262]
We introduce LSOformer, a novel approach harnessing autoregressive transformer models and predictive SSL to predict the trajectory of Quality of Results (QoR).
LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics.
arXiv Detail & Related papers (2024-09-16T18:45:07Z) - Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004]
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
arXiv Detail & Related papers (2024-08-26T17:03:14Z) - Cross-Modality Program Representation Learning for Electronic Design Automation with High-Level Synthesis [45.471039079664656]
Domain-specific accelerators (DSAs) have gained popularity for applications such as deep learning and autonomous driving.
We propose ProgSG, a model that allows interaction between the source code sequence modality and the graph modality in a deep and fine-grained way.
We show that ProgSG reduces the RMSE of design performance predictions by up to 22%, and identifies designs with an average performance improvement of 1.10×.
arXiv Detail & Related papers (2024-06-13T22:34:58Z) - AutoHLS: Learning to Accelerate Design Space Exploration for HLS Designs [10.690389829735661]
This paper proposes a novel framework called AutoHLS, which integrates a deep neural network (DNN) with Bayesian optimization (BO) to accelerate HLS hardware design optimization.
Our experimental results demonstrate up to a 70-fold speedup in exploration time.
arXiv Detail & Related papers (2024-03-15T21:14:44Z) - From English to ASIC: Hardware Implementation with Large Language Model [0.210674772139335]
This paper focuses on fine-tuning a leading-edge natural language model and reshuffling the HDL code dataset.
The fine-tuning aims to enhance the model's proficiency in generating precise and efficient ASIC designs.
The dataset reshuffling is intended to broaden the scope and improve the quality of training material.
arXiv Detail & Related papers (2024-03-11T09:57:16Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have known weaknesses: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Deep Latent State Space Models for Time-Series Generation [68.45746489575032]
We propose LS4, a generative model for sequences with latent variables evolving according to a state space ODE.
Inspired by recent deep state space models (S4), we achieve speedups by leveraging a convolutional representation of LS4.
We show that LS4 significantly outperforms previous continuous-time generative models in terms of marginal distribution, classification, and prediction scores on real-world datasets.
arXiv Detail & Related papers (2022-12-24T15:17:42Z) - HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
On the other hand, frequency domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency domain discrepancy.
arXiv Detail & Related papers (2022-03-04T06:37:45Z) - A Graph Deep Learning Framework for High-Level Synthesis Design Space Exploration [11.154086943903696]
High-Level Synthesis is a solution for fast prototyping application-specific hardware.
We propose, for the first time in the literature, graph neural networks that jointly predict acceleration performance and hardware costs of HLS designs.
We show that our approach achieves prediction accuracy comparable with that of commonly used simulators.
arXiv Detail & Related papers (2021-11-29T18:17:45Z) - Learning A 3D-CNN and Transformer Prior for Hyperspectral Image Super-Resolution [80.93870349019332]
We propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs.
Specifically, we first use the gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution processes.
arXiv Detail & Related papers (2021-11-27T15:38:57Z) - Accelerating gradient-based topology optimization design with dual-model neural networks [21.343803954998915]
In this work, neural networks are used as efficient surrogate models for forward and sensitivity calculations.
To improve the accuracy of sensitivity analyses, dual-model neural networks that are trained with both forward and sensitivity data are constructed.
For a problem of size 64×64, the efficiency gain is 137× in forward calculation and 74× in sensitivity analysis.
arXiv Detail & Related papers (2020-09-14T07:52:55Z)
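Several of the surrogate-assisted DSE approaches listed above (e.g. AutoHLS) pair a learned cost model with an optimizer so that only a few configurations need real synthesis runs. A minimal illustrative loop follows: `hls_latency` is a toy stand-in for an HLS run, the linear least-squares surrogate replaces the papers' learned DNN/GNN models, and all names are hypothetical.

```python
import numpy as np

# Toy stand-in for an HLS run: maps a pragma configuration
# (unroll factor, pipeline II, array-partition factor) to a latency score.
def hls_latency(cfg):
    unroll, ii, part = cfg
    return 100.0 / unroll + 5.0 * ii + 2.0 * abs(part - unroll)

def dse(budget=10, pool_size=200, n_seed=5):
    """Surrogate-guided DSE: fit a linear model to the configurations
    evaluated so far, then spend each step of the budget on the most
    promising not-yet-evaluated candidate."""
    rng = np.random.default_rng(0)
    pool = rng.integers(1, 9, size=(pool_size, 3)).astype(float)
    # seed the surrogate with a few real evaluations
    evaluated = {int(i): hls_latency(pool[i])
                 for i in rng.choice(pool_size, n_seed, replace=False)}
    for _ in range(budget):
        idx = list(evaluated)
        A = np.c_[np.ones(len(idx)), pool[idx]]         # design matrix [1, cfg]
        coef, *_ = np.linalg.lstsq(A, np.array([evaluated[i] for i in idx]),
                                   rcond=None)
        preds = np.c_[np.ones(pool_size), pool] @ coef  # surrogate latencies
        for i in np.argsort(preds):                     # cheapest unevaluated
            if int(i) not in evaluated:
                evaluated[int(i)] = hls_latency(pool[i])
                break
    return min(evaluated.values())
```

The loop spends `budget` real evaluations instead of synthesizing all 200 candidates; the cited works replace both the stand-in cost function (with actual HLS tool runs) and the linear surrogate (with DNNs, GNNs, or Bayesian-optimization acquisition functions).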
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.