PreRoutGNN for Timing Prediction with Order Preserving Partition: Global
Circuit Pre-training, Local Delay Learning and Attentional Cell Modeling
- URL: http://arxiv.org/abs/2403.00012v2
- Date: Tue, 12 Mar 2024 12:59:45 GMT
- Title: PreRoutGNN for Timing Prediction with Order Preserving Partition: Global
Circuit Pre-training, Local Delay Learning and Attentional Cell Modeling
- Authors: Ruizhe Zhong, Junjie Ye, Zhentao Tang, Shixiong Kai, Mingxuan Yuan,
Jianye Hao, Junchi Yan
- Abstract summary: We propose a two-stage approach to pre-routing timing prediction.
First, we propose global circuit training to pre-train a graph auto-encoder that learns a global graph embedding from the circuit netlist.
Second, we use a novel node updating scheme for message passing on a GCN, following the topological sorting sequence of the circuit graph and conditioning on the learned graph embedding.
Experiments on 21 real-world circuits achieve a new SOTA R2 of 0.93 for slack prediction, significantly surpassing the 0.59 of the previous SOTA method.
- Score: 84.34811206119619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-routing timing prediction has been recently studied for evaluating the
quality of a candidate cell placement in chip design. It involves directly
estimating the timing metrics for both pin-level (slack, slew) and edge-level
(net delay, cell delay), without time-consuming routing. However, it often
suffers from signal decay and error accumulation due to the long timing paths
in large-scale industrial circuits. To address these challenges, we propose a
two-stage approach. First, we propose global circuit training to pre-train a
graph auto-encoder that learns a global graph embedding from the circuit
netlist. Second, we use a novel node updating scheme for message passing on a
GCN, following the topological sorting sequence of the circuit graph and
conditioning on the learned graph embedding. This scheme residually models the
local time delay between two
adjacent pins in the updating sequence, and extracts the lookup table
information inside each cell via a new attention mechanism. To handle
large-scale circuits efficiently, we introduce an order preserving partition
scheme that reduces memory consumption while maintaining the topological
dependencies. Experiments on 21 real-world circuits achieve a new SOTA R2 of
0.93 for slack prediction, significantly surpassing the 0.59 of the previous
SOTA method. Code will be available at:
https://github.com/Thinklab-SJTU/EDA-AI.
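The abstract's two DAG-based ideas, topology-ordered node updating with residual local delays and the order preserving partition, can be illustrated compactly. The following is a minimal sketch under stated assumptions, not the authors' released implementation: `predict_local_delay` is a hypothetical stand-in for the learned residual delay model (the paper attends over each cell's lookup tables), and the scalar max-propagation stands in for the full GNN update.

```python
# Minimal sketch (not the authors' code) of (1) level-by-level updating along
# a topological order, where each pin's arrival time adds a learned local
# delay to its fan-in arrivals, and (2) an order preserving partition that
# packs whole levels into memory-friendly chunks without breaking
# topological dependencies.

from collections import defaultdict, deque

def levelize(num_pins, edges):
    """Kahn's algorithm: group the pins of a DAG into topological levels."""
    indeg = [0] * num_pins
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    frontier = deque(i for i in range(num_pins) if indeg[i] == 0)
    levels = []
    while frontier:
        level, nxt = list(frontier), deque()
        for u in level:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        levels.append(level)
        frontier = nxt
    return levels

def predict_local_delay(u, v):
    # Placeholder for a learned edge delay (hypothetical); a constant keeps
    # the sketch self-contained.
    return 1.0

def propagate_arrival(levels, edges):
    """Visit pins strictly in level order, so every predecessor is final
    before its fan-out is read; arrival is the max over fan-in paths."""
    pred = defaultdict(list)
    for u, v in edges:
        pred[v].append(u)
    arrival = {}
    for level in levels:
        for v in level:
            fan_in = [arrival[u] + predict_local_delay(u, v) for u in pred[v]]
            arrival[v] = max(fan_in) if fan_in else 0.0  # primary inputs: 0
    return arrival

def order_preserving_partition(levels, max_pins):
    """Greedily pack consecutive levels into chunks of at most max_pins pins;
    boundaries never split a level, so processing chunks in order preserves
    every topological dependency."""
    chunks, current, size = [], [], 0
    for level in levels:
        if current and size + len(level) > max_pins:
            chunks.append(current)
            current, size = [], 0
        current.extend(level)
        size += len(level)
    if current:
        chunks.append(current)
    return chunks

edges = [(0, 2), (1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]
levels = levelize(6, edges)
print(levels)                                 # [[0, 1], [2], [3, 4], [5]]
print(propagate_arrival(levels, edges))       # arrival time per pin
print(order_preserving_partition(levels, 3))  # [[0, 1, 2], [3, 4, 5]]
```

In the paper the per-level update is a GNN message-passing step rather than a scalar max, but the ordering and chunking constraints sketched here are the same.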
Related papers
- Networked Time Series Imputation via Position-aware Graph Enhanced
Variational Autoencoders [31.953958053709805]
We design a new model named PoGeVon which leverages a variational autoencoder (VAE) to predict missing values over both node time series features and graph structures.
Experiment results demonstrate the effectiveness of our model over baselines.
arXiv Detail & Related papers (2023-05-29T21:11:34Z)
- CARD: Channel Aligned Robust Blend Transformer for Time Series
Forecasting [50.23240107430597]
We design a special Transformer, i.e., Channel Aligned Robust Blend Transformer (CARD for short), that addresses key shortcomings of CI type Transformer in time series forecasting.
First, CARD introduces a channel-aligned attention structure that allows it to capture both temporal correlations among signals and dynamical dependence among multiple variables over time.
Second, in order to efficiently utilize the multi-scale knowledge, we design a token blend module to generate tokens with different resolutions.
Third, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue.
arXiv Detail & Related papers (2023-05-20T05:16:31Z)
- Spatial-Temporal Adaptive Graph Convolution with Attention Network for
Traffic Forecasting [4.1700160312787125]
We propose a novel network, Spatial-Temporal Adaptive graph convolution with Attention Network (STAAN) for traffic forecasting.
Firstly, we adopt an adaptive dependency matrix, instead of a pre-defined matrix, during GCN processing to infer the inter-dependencies among nodes.
Secondly, we integrate PW-attention, based on a graph attention network and designed for global dependency, together with GCN as the spatial block.
arXiv Detail & Related papers (2022-06-07T09:08:35Z)
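The "adaptive dependency matrix" named in the STAAN summary above is commonly realized in traffic-forecasting GCNs (Graph WaveNet-style) by learning two node-embedding tables and normalizing their similarity. The sketch below shows that generic construction as an assumption about the idea, not STAAN's actual code; all names are illustrative.

```python
# Generic adaptive adjacency: softmax(relu(E1 @ E2^T)) over learnable node
# embeddings replaces a pre-defined adjacency in a GCN step.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGCNLayer(nn.Module):
    def __init__(self, num_nodes, in_dim, out_dim, emb_dim=10):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                     # x: (batch, num_nodes, in_dim)
        # Learned, normalized adjacency inferred from node embeddings.
        adj = F.softmax(F.relu(self.e1 @ self.e2.T), dim=-1)
        return F.relu(self.lin(adj @ x))      # one graph-convolution step

layer = AdaptiveGCNLayer(num_nodes=8, in_dim=4, out_dim=16)
out = layer(torch.randn(2, 8, 4))
print(out.shape)                              # torch.Size([2, 8, 16])
```

Replacing a fixed adjacency with this learned one lets the model discover inter-node dependencies from data rather than from a distance-based prior.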
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling
and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction [22.974348682859322]
We propose a framework that can directly learn embeddings for the given netlist to enhance the quality of our node features.
By combining the learned embedding on top of the netlist with the GNNs, our method improves prediction performance, generalizes to new circuit lines, and is efficient in training, potentially saving over 90% of runtime.
arXiv Detail & Related papers (2021-11-10T20:56:29Z)
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid
Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- Online learning of windmill time series using Long Short-term Cognitive
Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors compared with a simple RNN, a Long Short-term Memory network, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
- Multi-Time-Scale Input Approaches for Hourly-Scale Rainfall-Runoff
Modeling based on Recurrent Neural Networks [0.0]
Two approaches are proposed to reduce the required computational time for time-series modeling through a recurrent neural network (RNN).
One approach provides coarse and fine temporal resolutions of the input time-series to the RNN in parallel.
The results confirm that both of the proposed approaches can reduce the computational time for the training of RNN significantly.
arXiv Detail & Related papers (2021-01-30T07:51:55Z)
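A generic sketch of the parallel coarse/fine-resolution input idea from the rainfall-runoff entry above: a fine branch reads the full hourly series while a coarse branch reads a daily-mean downsample, and the two final states are merged for the prediction. The pooling window, GRU branches, and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Two RNN branches over the same series at different temporal resolutions;
# the coarse branch sees far fewer steps, cutting recurrent compute.

import torch
import torch.nn as nn

class TwoResolutionRNN(nn.Module):
    def __init__(self, in_dim=1, hid_dim=16, pool=24):
        super().__init__()
        self.pool = nn.AvgPool1d(pool)               # hourly -> daily means
        self.fine = nn.GRU(in_dim, hid_dim, batch_first=True)
        self.coarse = nn.GRU(in_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(2 * hid_dim, 1)

    def forward(self, x):                            # x: (batch, T, in_dim)
        coarse = self.pool(x.transpose(1, 2)).transpose(1, 2)
        _, h_fine = self.fine(x)                     # (1, batch, hid_dim)
        _, h_coarse = self.coarse(coarse)
        return self.head(torch.cat([h_fine[-1], h_coarse[-1]], dim=-1))

model = TwoResolutionRNN()
y = model(torch.randn(8, 24 * 30, 1))                # 30 days of hourly data
print(y.shape)                                       # torch.Size([8, 1])
```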
- Short-Term Memory Optimization in Recurrent Neural Networks by
Autoencoder-based Initialization [79.42778415729475]
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and a better gradient propagation during the finetuning phase.
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
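The entry above pretrains a linear autoencoder for sequences and transfers its weights into a recurrent network. Below is a minimal sketch of that general recipe, assuming a linear recurrence h_t = A h_{t-1} + B x_t and a vanilla nn.RNN as the target; the paper's exact decoding and transfer details differ, so treat this as an illustration only.

```python
# Hedged sketch: pretrain a linear sequence autoencoder to memorize inputs,
# then copy its maps into an RNN's recurrent/input weights before finetuning.

import torch
import torch.nn as nn

seq_len, in_dim, hid_dim = 20, 8, 32

class LinearSeqAutoencoder(nn.Module):
    """Encodes a sequence into a state via a linear recurrence, then decodes
    the inputs back in reverse order from that state."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.A = nn.Linear(hid_dim, hid_dim, bias=False)   # state transition
        self.B = nn.Linear(in_dim, hid_dim, bias=False)    # input map
        self.dec = nn.Linear(hid_dim, in_dim, bias=False)  # readout

    def forward(self, x):                     # x: (batch, seq_len, in_dim)
        h = x.new_zeros(x.size(0), self.A.out_features)
        for t in range(x.size(1)):
            h = self.A(h) + self.B(x[:, t])   # linear: no nonlinearity
        recon = []
        for _ in range(x.size(1)):            # unroll the decoder
            recon.append(self.dec(h))
            h = self.A(h)
        return torch.stack(recon[::-1], dim=1)

ae = LinearSeqAutoencoder(in_dim, hid_dim)
x = torch.randn(4, seq_len, in_dim)
loss = ((ae(x) - x) ** 2).mean()              # pretrain to reconstruct inputs
loss.backward()

# Transfer the pretrained linear maps into an RNN before finetuning.
rnn = nn.RNN(in_dim, hid_dim, batch_first=True)
with torch.no_grad():
    rnn.weight_hh_l0.copy_(ae.A.weight)       # recurrent weights <- A
    rnn.weight_ih_l0.copy_(ae.B.weight)       # input weights <- B
```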