High-Level Synthesis Performance Prediction using GNNs: Benchmarking,
Modeling, and Advancing
- URL: http://arxiv.org/abs/2201.06848v1
- Date: Tue, 18 Jan 2022 09:53:48 GMT
- Title: High-Level Synthesis Performance Prediction using GNNs: Benchmarking,
Modeling, and Advancing
- Authors: Nan Wu, Hang Yang, Yuan Xie, Pan Li, Cong Hao
- Abstract summary: Agile hardware development requires fast and accurate circuit quality evaluation from early design stages.
We propose a rapid and accurate performance modeling approach, exploiting the representation power of graph neural networks (GNNs) by representing C/C++ programs as graphs.
Our proposed predictor outperforms HLS by up to 40X and existing predictors by 2X to 5X in terms of resource usage and timing prediction.
- Score: 21.8349113634555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Agile hardware development requires fast and accurate circuit quality
evaluation from early design stages. Existing work on high-level synthesis
(HLS) performance prediction usually requires extensive feature engineering
after the synthesis process. To expedite circuit evaluation from as early a
design stage as possible, we propose a rapid and accurate performance
modeling approach, exploiting the representation power of graph neural
networks (GNNs) by representing C/C++ programs as graphs. The contribution
of this work is
three-fold. First, we build a standard benchmark containing 40k synthesizable
C programs, which includes both synthetic programs and three sets of real-world
HLS benchmarks. Each program is implemented on FPGA to generate ground-truth
performance metrics. Second, we formally formulate the HLS performance
prediction problem on graphs, and propose multiple modeling strategies with
GNNs that leverage different trade-offs between prediction timeliness
(early/late prediction) and accuracy. Third, we further propose a novel
hierarchical GNN that does not sacrifice timeliness but largely improves
prediction accuracy, significantly outperforming HLS tools. We perform
extensive evaluations on both synthetic and unseen real-world programs; our
proposed predictor outperforms HLS by up to 40X and existing predictors by
2X to 5X in terms of resource usage and timing prediction.
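To make the setup above concrete, here is a minimal sketch of GNN-based performance regression over a program graph. It is an illustration only, not the authors' released model: the node features, mean-aggregation scheme, and the three target metrics are all assumptions.

```python
# Minimal sketch (not the paper's implementation) of GNN-based HLS
# performance prediction: a program graph in, resource/timing estimates out.
import torch
import torch.nn as nn

class GNNPerfPredictor(nn.Module):
    def __init__(self, in_dim=16, hid=64, n_targets=3):  # e.g. DSP, LUT, timing
        super().__init__()
        self.msg1 = nn.Linear(in_dim, hid)
        self.msg2 = nn.Linear(hid, hid)
        self.readout = nn.Linear(hid, n_targets)

    def forward(self, x, adj):
        # Two rounds of mean-aggregation message passing over the program
        # graph (nodes = operations, edges = data/control dependencies).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(self.msg1(adj @ x / deg))
        h = torch.relu(self.msg2(adj @ h / deg))
        # Graph-level mean pooling, then regress the performance metrics.
        return self.readout(h.mean(dim=0))

# Toy usage: a 5-node program graph with 16 features per node.
x = torch.randn(5, 16)
adj = torch.eye(5)  # placeholder adjacency (self-loops only)
print(GNNPerfPredictor()(x, adj))  # predicted [DSP, LUT, timing] estimates
```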
Related papers
- rule4ml: An Open-Source Tool for Resource Utilization and Latency Estimation for ML Models on FPGA [0.0]
This paper introduces a novel method to predict the resource utilization and inference latency of Neural Networks (NNs) before their synthesis and implementation on FPGA.
We leverage HLS4ML, a tool-flow that helps translate NNs into high-level synthesis (HLS) code.
Our method uses trained regression models for immediate pre-synthesis predictions.
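As a hedged illustration of the pre-synthesis idea, a regression model can map coarse architecture features to resource and latency targets; the feature set, toy data, and model choice below are invented placeholders, not rule4ml's actual pipeline.

```python
# Illustrative sketch of pre-synthesis resource/latency estimation via a
# trained regression model. Features and targets here are hypothetical.
from sklearn.ensemble import RandomForestRegressor
import numpy as np

# Per-network features: [num_layers, total_params, max_layer_width, bit_width]
# Targets: [LUTs, DSPs, latency_cycles]
X = np.array([[3, 1200, 64, 8],
              [5, 9800, 128, 16],
              [4, 4500, 96, 8]], dtype=float)
y = np.array([[2000, 10, 120],
              [15000, 80, 640],
              [6500, 30, 300]], dtype=float)

model = RandomForestRegressor(n_estimators=100).fit(X, y)
print(model.predict([[4, 3000, 64, 8]]))  # estimated [LUTs, DSPs, cycles]
```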
arXiv Detail & Related papers (2024-08-09T19:35:10Z)
- PerfSAGE: Generalized Inference Performance Predictor for Arbitrary Deep Learning Models on Edge Devices [8.272409756443539]
This paper describes PerfSAGE, a novel graph neural network (GNN) that predicts inference latency, energy, and memory footprint on an arbitrary DNN TFLite graph.
Using this dataset, we train PerfSAGE and provide experimental results that demonstrate state-of-the-art prediction accuracy with a Mean Absolute Percentage Error of 5% across all targets and model search spaces.
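The quoted figure is a Mean Absolute Percentage Error (MAPE); for reference, the metric itself is simply:

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute error of predictions, relative to the true values,
    # expressed as a percentage.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

print(mape([100, 200, 400], [95, 210, 390]))  # ~4.2 (percent)
```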
arXiv Detail & Related papers (2023-01-26T08:59:15Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- TEP-GNN: Accurate Execution Time Prediction of Functional Tests using Graph Neural Networks [5.899031548148629]
We propose a predictive model, dubbed TEP-GNN, which demonstrates that high-accuracy performance prediction is possible.
TEP-GNN uses FA-ASTs, or flow-augmented ASTs, as a graph-based code representation approach.
We evaluate TEP-GNN using four real-life Java open source programs, based on 922 test files mined from the projects' public repositories.
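As a toy illustration of the FA-AST idea, syntax-tree edges are augmented with control- and data-flow edges; the snippet and edge labels below are assumptions, not the paper's exact schema.

```python
import networkx as nx

g = nx.DiGraph()
# AST edges for a contrived snippet: for (...) { x = f(x); }
g.add_edge("ForStmt", "Init", kind="ast")
g.add_edge("ForStmt", "Body", kind="ast")
g.add_edge("Body", "Assign:x=f(x)", kind="ast")
# Augmenting flow edges: a loop back-edge and a def-use (data-flow) edge.
g.add_edge("Body", "ForStmt", kind="control_flow")
g.add_edge("Assign:x=f(x)", "Assign:x=f(x)", kind="data_flow")

print(list(g.edges(data="kind")))  # mixed AST + flow edges, ready for a GNN
```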
arXiv Detail & Related papers (2022-08-25T09:08:32Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
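A minimal sketch of gradual magnitude pruning, the general idea behind in-training sparsification here; note that CGP itself also prunes graph structure and can regrow connections, which this simplification omits.

```python
import torch

def prune_smallest(weight, sparsity):
    # Zero out the lowest-magnitude fraction of entries in-place.
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight
    thresh = weight.abs().flatten().kthvalue(k).values
    return weight.masked_fill_(weight.abs() <= thresh, 0.0)

w = torch.randn(8, 8)
for step, sparsity in enumerate([0.1, 0.3, 0.5, 0.7]):  # gradual schedule
    # ... normal training epochs would run here, with no retraining phase ...
    prune_smallest(w, sparsity)
    print(f"step {step}: {(w == 0).float().mean():.0%} zeros")
```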
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Hybrid Graph Models for Logic Optimization via Spatio-Temporal Information [15.850413267830522]
Two major concerns that may impede production-ready ML applications in EDA are accuracy requirements and generalization capability.
We propose hybrid graph neural network (GNN) based approaches towards highly accurate quality-of-result (QoR) estimations.
Evaluation on 3.3 million data points shows that the testing mean absolute percentage error (MAPE) on designs seen and unseen during training is no more than 1.2% and 3.1%, respectively.
arXiv Detail & Related papers (2022-01-20T21:12:22Z)
- A Graph Deep Learning Framework for High-Level Synthesis Design Space Exploration [11.154086943903696]
High-Level Synthesis is a solution for fast prototyping of application-specific hardware.
We propose, for the first time in the HLS literature, graph neural networks that jointly predict acceleration performance and hardware costs.
We show that our approach achieves prediction accuracy comparable with that of commonly used simulators.
arXiv Detail & Related papers (2021-11-29T18:17:45Z)
- Towards More Fine-grained and Reliable NLP Performance Prediction [85.78131503006193]
We make two contributions to improving performance prediction for NLP tasks.
First, we examine performance predictors for holistic measures of accuracy like F1 or BLEU.
Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration.
arXiv Detail & Related papers (2021-02-10T15:23:20Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
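A toy sketch of the soft instruction-pointer mechanism the name refers to: a distribution over statements is advanced each step by learned branch probabilities. The dimensions, jump targets, two-way branching, and single shared state below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

n_stmts, hid = 4, 32
stmt_emb = torch.randn(n_stmts, hid)          # per-statement embeddings
h = torch.zeros(hid)                          # RNN state carried along
ptr = F.one_hot(torch.tensor(0), n_stmts).float()  # soft instruction pointer

step_rnn = torch.nn.GRUCell(hid, hid)
branch = torch.nn.Linear(hid, 2)              # P(fall through), P(jump)

succ_fall = torch.roll(torch.eye(n_stmts), -1, dims=0)  # stmt i -> i+1 (wraps)
succ_jump = torch.eye(n_stmts)[[2, 3, 0, 3]]            # toy jump targets

for _ in range(3):                            # a few interpreter steps
    inp = (ptr @ stmt_emb).unsqueeze(0)       # statement under the pointer
    h = step_rnn(inp, h.unsqueeze(0)).squeeze(0)
    p = F.softmax(branch(h), dim=-1)          # learned branch probabilities
    ptr = p[0] * (ptr @ succ_fall) + p[1] * (ptr @ succ_jump)
print(ptr)                                    # soft distribution over stmts
```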
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
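A minimal sketch of a jointly scored (architecture, recipe) predictor in this spirit; the encodings and dimensions are invented placeholders.

```python
import torch
import torch.nn as nn

# Jointly score an (architecture, recipe) pair with one predictor network.
predictor = nn.Sequential(
    nn.Linear(10 + 4, 64), nn.ReLU(),
    nn.Linear(64, 1),          # predicted accuracy for the pair
)

arch = torch.rand(1, 10)   # assumed encoding: e.g. per-stage depths/widths
recipe = torch.rand(1, 4)  # assumed encoding: e.g. lr, epochs, mixup, EMA
score = predictor(torch.cat([arch, recipe], dim=1))
print(score.item())        # higher = more promising pair to train fully
```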
arXiv Detail & Related papers (2020-06-03T05:20:21Z)