LHNN: Lattice Hypergraph Neural Network for VLSI Congestion Prediction
- URL: http://arxiv.org/abs/2203.12831v1
- Date: Thu, 24 Mar 2022 03:31:18 GMT
- Title: LHNN: Lattice Hypergraph Neural Network for VLSI Congestion Prediction
- Authors: Bowen Wang, Guibao Shen, Dong Li, Jianye Hao, Wulong Liu, Yu Huang,
Hongzhong Wu, Yibo Lin, Guangyong Chen, Pheng Ann Heng
- Abstract summary: The lattice hypergraph (LH-graph) is a novel graph formulation for circuits.
LHNN consistently achieves more than 35% improvement over U-nets and Pix2Pix on the F1 score.
- Score: 70.31656245793302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise congestion prediction from a placement solution plays a crucial role
in circuit placement. This work proposes the lattice hypergraph (LH-graph), a
novel graph formulation for circuits that preserves netlist data throughout the
learning process and enables congestion information to propagate both
geometrically and topologically. Based on this formulation, we further develop
LHNN, a heterogeneous graph neural network architecture that jointly performs
routing demand regression to support congestion spot classification. LHNN
consistently achieves more than 35% improvement over U-nets and Pix2Pix on the
F1 score. We expect our work to highlight essential procedures in applying
machine learning to congestion prediction.
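Although the paper's implementation is not reproduced here, the core idea of the joint objective (one shared graph backbone feeding both a routing-demand regression head and a congestion-spot classification head) can be sketched as follows. This is a hedged toy rendering, not the authors' code; `ToyLHNN`, all dimensions, and the random incidence matrix are illustrative assumptions.

```python
# Toy sketch of LHNN's joint objective: grid cells and nets form a
# heterogeneous graph, a shared backbone embeds both, and two heads
# predict routing demand (regression) and congestion spots (classification).
import torch
import torch.nn as nn

class ToyLHNN(nn.Module):
    def __init__(self, grid_dim=8, net_dim=4, hidden=32):
        super().__init__()
        self.grid_proj = nn.Linear(grid_dim, hidden)
        self.net_proj = nn.Linear(net_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.regress = nn.Linear(hidden, 1)   # routing demand per grid cell
        self.classify = nn.Linear(hidden, 1)  # congestion-spot logit per cell

    def forward(self, grid_x, net_x, incidence):
        # incidence: (num_grid, num_net) 0/1 matrix, 1 if a net covers a cell
        g = torch.relu(self.grid_proj(grid_x))
        n = torch.relu(self.net_proj(net_x))
        g = g + incidence @ self.msg(n)       # one round of net -> grid messages
        return self.regress(g), self.classify(g)

model = ToyLHNN()
grid_x = torch.randn(100, 8)                  # a 10x10 lattice, flattened
net_x = torch.randn(20, 4)                    # 20 nets
incidence = (torch.rand(100, 20) < 0.1).float()
demand, logit = model(grid_x, net_x, incidence)
loss = nn.functional.mse_loss(demand, torch.rand(100, 1)) \
     + nn.functional.binary_cross_entropy_with_logits(
           logit, (torch.rand(100, 1) > 0.9).float())
loss.backward()
```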
Related papers
- Graph Pruning Based Spatial and Temporal Graph Convolutional Network with Transfer Learning for Traffic Prediction [0.0]
This study proposes TL-GPSTGN, a novel spatial-temporal graph convolutional network built on graph pruning and a transfer learning framework.
The results demonstrate the exceptional predictive accuracy of TL-GPSTGN on a single dataset, as well as its robust transfer performance across different datasets.
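As a rough illustration of the graph-pruning step (not the paper's code), one can threshold a sensor correlation matrix so the spatial convolution only mixes strongly related nodes; the threshold and data here are assumed for the example.

```python
# Hedged sketch of correlation-based graph pruning for traffic sensors.
import numpy as np

def prune_graph(corr, threshold=0.5):
    """Keep edges with |correlation| >= threshold; zero out the rest."""
    adj = np.where(np.abs(corr) >= threshold, corr, 0.0)
    np.fill_diagonal(adj, 1.0)  # keep self-loops for the graph convolution
    return adj

corr = np.corrcoef(np.random.rand(10, 288))  # 10 sensors, one day of 5-min steps
print(prune_graph(corr).round(2))
```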
arXiv Detail & Related papers (2024-09-25T00:59:23Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
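For context, direct feedback alignment (DFA) replaces the backward pass with fixed random feedback matrices that project the output error straight to each hidden layer. A minimal sketch on a plain two-layer network (the paper's GNN adaptation is more involved; all shapes here are illustrative):

```python
# Sketch of direct feedback alignment: the hidden layer's error signal
# comes from a fixed random matrix B1, not from the transpose of W2.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(2, 16))
B1 = rng.normal(size=(16, 2))        # fixed random feedback, never trained

x, y = rng.normal(size=8), np.array([1.0, 0.0])
for _ in range(100):
    h = np.tanh(W1 @ x)
    out = W2 @ h
    e = out - y                       # output error
    dh = (B1 @ e) * (1 - h**2)        # error delivered by random feedback
    W2 -= 0.01 * np.outer(e, h)
    W1 -= 0.01 * np.outer(dh, x)
```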
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
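A hedged sketch of the pre-computation idea: aggregate neighbor features once per relation with a single sparse multiply, then compress the result with a random projection so the precomputed tensors stay small. Matrix sizes and the 1% edge density are assumptions for the example.

```python
# One-time message passing followed by random-projection compression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                       # node features
A = (rng.random((1000, 1000)) < 0.01).astype(float)   # one relation's adjacency
agg = A @ X                                           # one-time message passing
P = rng.normal(size=(64, 16)) / np.sqrt(16)           # random projection
compressed = agg @ P                                  # (1000, 16), stored offline
```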
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
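For concreteness, the graph convolution under study is (up to variants) a degree-normalized neighborhood average, H' = D^{-1/2}(A + I)D^{-1/2} H W; a minimal numpy rendering with illustrative sizes:

```python
# Symmetric-normalized graph convolution on a 3-node toy graph.
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
A_hat = A + np.eye(3)                           # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(1))) # degree normalization
H = np.random.rand(3, 4)                        # node features
W = np.random.rand(4, 2)                        # learnable weights
H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
```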
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Simpler is better: Multilevel Abstraction with Graph Convolutional Recurrent Neural Network Cells for Traffic Prediction [6.968068088508505]
We present a new sequence-to-sequence architecture for graph neural networks (GNNs).
We also present a new benchmark dataset of street-level segment data in Montreal, Canada.
Our model improves performance by more than 7% for one-hour prediction compared to the baseline methods.
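One common way to build such cells (a hedged sketch, not necessarily the paper's exact design) is a GRU whose input and hidden state are first mixed over the road graph by a graph convolution; `GConvGRUCell` and all sizes below are illustrative.

```python
# Graph-convolutional GRU cell: standard GRU gating, but features are
# mixed over the road graph before entering the gates.
import torch
import torch.nn as nn

class GConvGRUCell(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gates = nn.Linear(in_dim + hid_dim, 2 * hid_dim)
        self.cand = nn.Linear(in_dim + hid_dim, hid_dim)

    def forward(self, x, h, adj):
        x, h_mixed = adj @ x, adj @ h     # graph convolution over neighbors
        zr = torch.sigmoid(self.gates(torch.cat([x, h_mixed], dim=-1)))
        z, r = zr.chunk(2, dim=-1)
        c = torch.tanh(self.cand(torch.cat([x, r * h_mixed], dim=-1)))
        return (1 - z) * h + z * c

cell = GConvGRUCell(2, 16)
adj = torch.eye(50)                       # stand-in for a normalized road graph
h = torch.zeros(50, 16)
for t in range(12):                       # one hour of 5-minute steps
    h = cell(torch.randn(50, 2), h, adj)
```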
arXiv Detail & Related papers (2022-09-08T14:56:29Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
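The invertible building block typically used for this, shown here as a hedged sketch rather than the paper's architecture, is an affine coupling layer: exactly invertible by construction, so the same weights serve the forward (prediction) and backward (generation) directions.

```python
# Affine coupling layer: half the input conditions an affine transform of
# the other half, which makes the map exactly invertible.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 32), nn.ReLU(),
            nn.Linear(32, 2 * (dim - self.half)))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

layer = AffineCoupling(8)
x = torch.randn(4, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)
```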
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on the prediction of output node voltages encourages representations that can be adapted to new unseen topologies or to the prediction of new circuit-level properties.
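A minimal sketch of the pretrain-then-adapt recipe (names, sizes, and the dense stand-in adjacency are assumptions): learn node embeddings by regressing node voltages, then keep the backbone and fit a fresh head for a new circuit-level property.

```python
# Pretrain a small GNN backbone on node-voltage regression, then reuse it.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, in_dim=4, hid=32):
        super().__init__()
        self.lin1, self.lin2 = nn.Linear(in_dim, hid), nn.Linear(hid, hid)

    def forward(self, x, adj):
        x = torch.relu(adj @ self.lin1(x))   # mix over circuit connectivity
        return torch.relu(adj @ self.lin2(x))

backbone = TinyGNN()
voltage_head = nn.Linear(32, 1)              # pretraining target: node voltages
x, adj = torch.randn(30, 4), torch.eye(30)   # stand-in circuit graph
pred = voltage_head(backbone(x, adj))
# ...train, then reuse `backbone` on an unseen topology with a fresh head:
property_head = nn.Linear(32, 1)             # fine-tune for a circuit-level task
```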
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Training Graph Neural Networks by Graphon Estimation [2.5997274006052544]
We propose to train a graph neural network via resampling from a graphon estimate obtained from the underlying network data.
We show that our approach is competitive with, and in many cases outperforms, other GNN training methods that reduce over-smoothing.
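A hedged sketch of the resampling idea, using a deliberately simple block-histogram graphon estimator (the paper's estimator may differ): group nodes by degree, average edge densities within block pairs, then draw new training graphs from the estimate.

```python
# Estimate a graphon by a degree-sorted block histogram, then resample.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((100, 100)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                 # observed simple graph

order = np.argsort(A.sum(1))                   # sort nodes by degree
k, n = 5, len(order)
blocks = np.array_split(order, k)
W = np.array([[A[np.ix_(bi, bj)].mean() for bj in blocks] for bi in blocks])

# draw a fresh n-node graph from the estimated graphon W
labels = rng.integers(0, k, size=n)
P = W[np.ix_(labels, labels)]
A_new = (rng.random((n, n)) < P).astype(float)
A_new = np.triu(A_new, 1); A_new = A_new + A_new.T
```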
arXiv Detail & Related papers (2021-09-04T19:21:48Z)
- Implementing a foveal-pit inspired filter in a Spiking Convolutional Neural Network: a preliminary study [0.0]
We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding.
The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library.
The network achieves up to 90% accuracy, with loss calculated using the cross-entropy function.
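The retinal front end is a Difference-of-Gaussians (DoG) filter: a narrow center Gaussian minus a wider surround Gaussian, whose responses can be rank-order encoded so stronger responses fire earlier. A small sketch with assumed sigmas:

```python
# Difference-of-Gaussians filtering plus a rank-order firing schedule.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma_center=1.0, sigma_surround=2.0):
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

img = np.random.rand(28, 28)
response = dog(img)
# rank-order encoding: stronger responses fire earlier
firing_order = np.argsort(-np.abs(response).ravel())
```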
arXiv Detail & Related papers (2021-05-29T15:28:30Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
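The weight-preserving idea can be sketched in the spirit of importance-weighted regularization (a hedged approximation; TWP's actual importance measure also accounts for the graph topology through attention gradients): record per-parameter importance after each task and penalize drift of important weights on later tasks.

```python
# Importance-weighted penalty against forgetting, in the spirit of TWP.
import torch

def importance(model, loss):
    # squared gradients of the old task's loss as per-parameter importance
    grads = torch.autograd.grad(loss, model.parameters(), retain_graph=True)
    return [g.detach() ** 2 for g in grads]

def twp_penalty(model, old_params, imps, lam=100.0):
    # penalize movement of weights that mattered for previous tasks
    return lam * sum((i * (p - o) ** 2).sum()
                     for p, o, i in zip(model.parameters(), old_params, imps))

model = torch.nn.Linear(4, 2)
loss = model(torch.randn(8, 4)).pow(2).mean()
imps = importance(model, loss)
old = [p.detach().clone() for p in model.parameters()]
# later, on a new task: total_loss = new_task_loss + twp_penalty(model, old, imps)
```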
arXiv Detail & Related papers (2020-12-10T22:30:25Z)