GraPhSyM: Graph Physical Synthesis Model
- URL: http://arxiv.org/abs/2308.03944v2
- Date: Thu, 7 Sep 2023 15:59:20 GMT
- Title: GraPhSyM: Graph Physical Synthesis Model
- Authors: Ahmed Agiza, Rajarshi Roy, Teodor Dumitru Ene, Saad Godil, Sherief
Reda, Bryan Catanzaro
- Abstract summary: We introduce GraPhSyM, a Graph Attention Network (GATv2) model for fast and accurate estimation of post-physical synthesis circuit delay and area metrics from pre-physical synthesis circuit netlists.
GraPhSyM provides accurate visibility of final design metrics to early EDA stages, enabling global co-optimization across stages.
- Score: 21.568740364211983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we introduce GraPhSyM, a Graph Attention Network (GATv2) model
for fast and accurate estimation of post-physical synthesis circuit delay and
area metrics from pre-physical synthesis circuit netlists. Once trained,
GraPhSyM provides accurate visibility of final design metrics to early EDA
stages, such as logic synthesis, without running the slow physical synthesis
flow, enabling global co-optimization across stages. Additionally, the swift
and precise feedback provided by GraPhSyM is instrumental for
machine-learning-based EDA optimization frameworks. Given a gate-level netlist
of a circuit represented as a graph, GraPhSyM utilizes graph structure,
connectivity, and electrical property features to predict the impact of
physical synthesis transformations such as buffer insertion and gate sizing.
When trained on a dataset of 6000 prefix adder designs synthesized at an
aggressive delay target, GraPhSyM can accurately predict the post-synthesis
delay (98.3%) and area (96.1%) metrics of unseen adders with a fast 0.22s
inference time. Furthermore, we illustrate the compositionality of GraPhSyM by
employing the model trained on a fixed delay target to accurately anticipate
post-synthesis metrics at a variety of unseen delay targets. Lastly, we report
promising generalization capabilities of the GraPhSyM model when it is
evaluated on circuits different from the adders it was exclusively trained on.
The results show the potential for GraPhSyM to serve as a powerful tool for
advanced optimization techniques and as an oracle for EDA machine learning
frameworks.
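To make the core mechanism concrete: the abstract describes a GATv2 network that attends over a gate-level netlist graph and pools node embeddings into delay and area predictions. The sketch below is an illustrative single-head GATv2 attention layer in plain NumPy with untrained placeholder weights, not the GraPhSyM implementation itself (the paper's feature set, head count, and readout are assumptions here); it follows the GATv2 scoring order e_ij = a · LeakyReLU(W_l h_i + W_r h_j) from Brody et al.

```python
import numpy as np

rng = np.random.default_rng(0)

def gatv2_layer(h, adj, W_l, W_r, a, slope=0.2):
    """One single-head GATv2 layer on a dense adjacency matrix.

    h:   (n, d_in) node features (e.g. gate type, fan-out, load capacitance)
    adj: (n, n) boolean adjacency, adj[i, j] = True for edge j -> i
    Scores e_ij = a . LeakyReLU(W_l h_i + W_r h_j); output sum_j alpha_ij W_r h_j.
    """
    n = h.shape[0]
    hl, hr = h @ W_l.T, h @ W_r.T                  # (n, d_out) each
    scores = np.full((n, n), -np.inf)              # -inf masks non-edges
    for i in range(n):
        for j in range(n):
            if adj[i, j] or i == j:                # self-loops included
                z = hl[i] + hr[j]
                z = np.where(z > 0, z, slope * z)  # LeakyReLU
                scores[i, j] = a @ z
    # Softmax over each node's neighbourhood (masked entries exp to 0).
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ hr                              # attention-weighted aggregation

# Toy 4-gate netlist: edges 0->2, 1->2, 2->3 (two inputs feeding a chain).
h   = rng.normal(size=(4, 6))
adj = np.zeros((4, 4), dtype=bool)
adj[2, 0] = adj[2, 1] = adj[3, 2] = True

d_out = 8
W_l = rng.normal(size=(d_out, 6))
W_r = rng.normal(size=(d_out, 6))
a   = rng.normal(size=d_out)

emb = gatv2_layer(h, adj, W_l, W_r, a)

# Graph-level readout: mean-pool node embeddings, then a linear head for
# the two regression targets (delay, area). Weights are untrained placeholders.
W_head = rng.normal(size=(2, d_out))
delay_area = W_head @ emb.mean(axis=0)
print(delay_area.shape)  # (2,)
```

A production version would use a library implementation such as PyTorch Geometric's `GATv2Conv` with multiple heads and sparse edge indices rather than this dense double loop.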
Related papers
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AI model selection (AMS) mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- A Machine Learning Approach to Improving Timing Consistency between Global Route and Detailed Route [3.202646674984817]
Inaccurate timing prediction wastes design effort, hurts circuit performance, and may lead to design failure.
This work focuses on timing prediction after clock tree synthesis and placement legalization, which is the earliest opportunity to time and optimize a "complete" netlist.
To bridge the gap between GR-based parasitic and timing estimation and post-DR results during post-GR optimization, machine learning (ML)-based models are proposed.
arXiv Detail & Related papers (2023-05-11T16:01:23Z)
- PhysFormer++: Facial Video-based Physiological Measurement with SlowFast Temporal Difference Transformer [76.40106756572644]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose two end-to-end video transformer architectures, PhysFormer and PhysFormer++, to adaptively aggregate both local and global features for rPPG representation enhancement.
Comprehensive experiments are performed on four benchmark datasets to show our superior performance on both intra-dataset and cross-dataset testing.
arXiv Detail & Related papers (2023-02-07T15:56:03Z)
- Retrosynthetic Planning with Dual Value Networks [107.97218669277913]
We propose a novel online training algorithm, called Planning with Dual Value Networks (PDVN).
PDVN alternates between the planning phase and updating phase to predict the synthesizability and cost of molecules.
On the widely-used USPTO dataset, our PDVN algorithm improves the search success rate of existing multi-step planners.
arXiv Detail & Related papers (2023-01-31T16:43:53Z)
- Self-learning locally-optimal hypertuning using maximum entropy, and comparison of machine learning approaches for estimating fatigue life in composite materials [0.0]
We develop an ML nearest-neighbors-alike algorithm based on the principle of maximum entropy to predict fatigue damage.
The predictions achieve a good level of accuracy, similar to other ML algorithms.
arXiv Detail & Related papers (2022-10-19T12:20:07Z)
- A High Throughput Generative Vector Autoregression Model for Stochastic Synapses [0.0]
We develop a high throughput generative model for synaptic arrays based on electrical measurement data for resistive memory cells.
We demonstrate array sizes above one billion cells and throughputs exceeding one hundred million weight updates per second, above the pixel rate of a 30 frames/s 4K video stream.
arXiv Detail & Related papers (2022-05-10T17:08:30Z)
- Hybrid Graph Models for Logic Optimization via Spatio-Temporal Information [15.850413267830522]
Two major concerns that may impede production-ready ML applications in EDA are accuracy requirements and generalization capability.
We propose hybrid graph neural network (GNN) based approaches towards highly accurate quality-of-result (QoR) estimations.
Evaluation on 3.3 million data points shows that the testing mean absolute percentage error (MAPE) on designs seen and unseen during training is no more than 1.2% and 3.1%, respectively.
arXiv Detail & Related papers (2022-01-20T21:12:22Z)
- A Graph Deep Learning Framework for High-Level Synthesis Design Space Exploration [11.154086943903696]
High-Level Synthesis (HLS) is a solution for fast prototyping of application-specific hardware.
We propose, for the first time in the literature, graph neural networks that jointly predict the acceleration performance and hardware costs of HLS designs.
We show that our approach achieves prediction accuracy comparable with that of commonly used simulators.
arXiv Detail & Related papers (2021-11-29T18:17:45Z)
- PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose the PhysFormer, an end-to-end video transformer based architecture.
arXiv Detail & Related papers (2021-11-23T18:57:11Z)
- A Generative Learning Approach for Spatio-temporal Modeling in Connected Vehicular Network [55.852401381113786]
This paper proposes LaMI (Latency Model Inpainting), a novel framework that generates a comprehensive spatio-temporal model of the wireless access latency of connected vehicles.
LaMI adopts the idea from image inpainting and synthesizing and can reconstruct the missing latency samples by a two-step procedure.
In particular, it first discovers the spatial correlation between samples collected in various regions using a patching-based approach and then feeds the original and highly correlated samples into a Variational Autoencoder (VAE).
arXiv Detail & Related papers (2020-03-16T03:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.