Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction
- URL: http://arxiv.org/abs/2111.05941v1
- Date: Wed, 10 Nov 2021 20:56:29 GMT
- Title: Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction
- Authors: Amur Ghose, Vincent Zhang, Yingxue Zhang, Dong Li, Wulong Liu, Mark
Coates
- Abstract summary: We propose a framework that can directly learn embeddings for the given netlist to enhance the quality of our node features.
By combining the learned embedding on top of the netlist with the GNNs, our method improves prediction performance, generalizes to new circuit lines, and is efficient in training, potentially saving over 90% of runtime.
- Score: 22.974348682859322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With continuing technology node scaling, an accurate prediction model at early
design stages can significantly reduce the design cycle. Especially during
logic synthesis, predicting cell congestion due to improper logic combination
can reduce the burden of subsequent physical implementations. There have been
attempts using Graph Neural Network (GNN) techniques to tackle congestion
prediction during the logic synthesis stage. However, because the core idea of
GNNs is built on the message-passing framework, these methods require informative
cell features to achieve reasonable performance, and such features are impractical
to obtain at the early logic synthesis stage. To address this limitation, we propose a framework that
can directly learn embeddings for the given netlist to enhance the quality of
our node features. Popular random-walk-based embedding methods such as
Node2vec, LINE, and DeepWalk suffer from cross-graph alignment issues and
poor generalization to unseen netlist graphs, yielding inferior performance and
incurring significant runtime. In our framework, we introduce a superior
alternative to obtain node embeddings that can generalize across netlist graphs
using matrix factorization methods. We propose an efficient mini-batch training
method at the sub-graph level that enables parallel training and satisfies
the memory constraints of large-scale netlists. We present results utilizing
open-source EDA tools such as the DREAMPlace and OpenROAD frameworks on a variety
of openly available circuits. By combining the learned embedding on top of the
netlist with the GNNs, our method improves prediction performance, generalizes
to new circuit lines, and is efficient in training, potentially saving over 90% of runtime.
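The abstract does not spell out the exact factorization objective, so the following is a rough illustration only: it uses a truncated SVD of the symmetrically normalized adjacency matrix to produce per-node embeddings, which could then be concatenated with raw cell features before the GNN. The function name and the fusion step are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def factorized_node_embeddings(adj: sp.csr_matrix, dim: int = 64) -> np.ndarray:
    """Per-node embeddings from a rank-`dim` factorization of D^-1/2 A D^-1/2."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1.0)))  # guard isolated nodes
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt                    # symmetric normalization
    u, s, _ = svds(norm_adj, k=dim)                             # truncated factorization
    return u * np.sqrt(s)                                       # spectral-style embeddings

# Hypothetical fusion with existing cell features before the GNN:
# node_features = np.hstack([cell_features, factorized_node_embeddings(adj)])
```

The same routine could in principle be applied per sub-graph to respect memory limits during mini-batch training, though the paper's actual partitioning scheme is not described in the abstract.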
Related papers
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903] (arXiv 2023-10-23)
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
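As a loose illustration of the random-projection idea (not RpHGNN's actual operators), pre-computed neighbor aggregations could be compressed into a fixed-size matrix with a Gaussian projection:

```python
import numpy as np

def random_project(features: np.ndarray, out_dim: int = 128, seed: int = 0) -> np.ndarray:
    """Compress pre-computed (e.g. neighbor-aggregated) features with a Gaussian projection."""
    rng = np.random.default_rng(seed)
    proj = rng.normal(scale=1.0 / np.sqrt(out_dim), size=(features.shape[1], out_dim))
    return features @ proj
```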
- Fast and Effective GNN Training with Linearized Random Spanning Trees [20.73637495151938] (arXiv 2023-06-07)
We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
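A rough sketch of the underlying idea, assuming NetworkX's uniform spanning-tree sampler and a DFS ordering as the linearization (the paper's actual procedure may differ):

```python
import networkx as nx

def linearized_spanning_tree_edges(graph: nx.Graph, seed: int = 0) -> list:
    """Sample a uniform random spanning tree and return the edges of a DFS-ordered path."""
    tree = nx.random_spanning_tree(graph, seed=seed)
    order = list(nx.dfs_preorder_nodes(tree))       # linear node ordering over the tree
    return list(zip(order, order[1:]))              # consecutive pairs form a path graph
```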
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551] (arXiv 2022-10-14)
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956] (arXiv 2022-07-18)
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
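As a generic illustration of gradual pruning (not CGP's actual criterion, which also targets graph structure), a magnitude-based schedule over training steps might look like:

```python
import torch

def gradual_magnitude_prune(weight: torch.Tensor, step: int, total_steps: int,
                            final_sparsity: float = 0.9) -> torch.Tensor:
    """Zero out the smallest-magnitude entries, pruning a larger fraction each step."""
    sparsity = final_sparsity * min(step / total_steps, 1.0)
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)
```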
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636] (arXiv 2022-03-29)
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749] (arXiv 2022-03-03)
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
- Binary Graph Neural Networks [69.51765073772226] (arXiv 2020-12-31)
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
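A minimal sketch of one such binarization strategy, assuming XNOR-style sign quantization with a per-tensor scaling factor (the paper evaluates several strategies; this is illustrative only):

```python
import torch

def binarize_weight(weight: torch.Tensor) -> torch.Tensor:
    """Replace a weight tensor with {-alpha, +alpha} entries (alpha = mean |w|)."""
    alpha = weight.abs().mean()
    return torch.sign(weight) * alpha
```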
- Analyzing the Performance of Graph Neural Networks with Pipe Parallelism [2.269587850533721] (arXiv 2020-12-20)
We focus on Graph Neural Networks (GNNs) that have found great success in tasks such as node or edge classification and link prediction.
New approaches for processing larger networks are needed to advance graph techniques.
We study how GNNs could be parallelized using existing tools and frameworks that are known to be successful in the deep learning community.