Modeling Gate-Level Abstraction Hierarchy Using Graph Convolutional
Neural Networks to Predict Functional De-Rating Factors
- URL: http://arxiv.org/abs/2104.01812v1
- Date: Mon, 5 Apr 2021 08:38:16 GMT
- Title: Modeling Gate-Level Abstraction Hierarchy Using Graph Convolutional
Neural Networks to Predict Functional De-Rating Factors
- Authors: Aneesh Balakrishnan, Thomas Lange, Maximilien Glorieux, Dan
Alexandrescu and Maksim Jenihhin
- Abstract summary: The paper proposes a methodology for modeling a gate-level netlist using a Graph Convolutional Network (GCN)
The model predicts the overall functional de-rating factors of sequential elements of a given circuit.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper proposes a methodology for modeling a gate-level netlist
using a Graph Convolutional Network (GCN). The model predicts the overall
functional de-rating factors of the sequential elements of a given circuit. In
the preliminary phase of the work, the goal is to build a GCN that can take a
gate-level netlist as input after transforming it into a Probabilistic
Bayesian Graph expressed in the Graph Modeling Language (GML). This step
enables the GCN to learn the structural information of the netlist in the
graph domain. In the second phase of the work, the GCN is trained with the
functional de-rating factors of a very small number of individual sequential
elements (flip-flops). The third phase assesses the accuracy of the GCN model
when modeling an arbitrary circuit netlist. The designed model was validated
on two circuits: the IEEE 754 standard double-precision floating-point adder
and the 10-Gigabit Ethernet MAC (IEEE 802.3 standard). The predicted results
were compared with the results of a standard fault-injection campaign for
Single Event Upset (SEU) errors. The validation results are plotted as
histograms and sorted probabilities and evaluated with a Confidence Interval
(CI) metric between the predicted and simulated fault-injection results.
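The pipeline the abstract describes, a graph derived from a netlist fed through a GCN that scores each node, can be sketched minimally as follows. This is a hypothetical NumPy illustration of a two-layer GCN forward pass on a toy 4-node graph, not the paper's actual model, features, or training procedure; the weights are random and the training loop on flip-flop de-rating factors is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency of a small undirected toy graph (4 nodes), standing in for
# a netlist-derived graph that would normally be read from a GML file.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetric normalization with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(A.shape[0])
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Per-node input features (e.g. gate-type encodings); 3 dims here
X = rng.standard_normal((4, 3))

# Randomly initialized layer weights (training is not shown)
W1 = rng.standard_normal((3, 8))
W2 = rng.standard_normal((8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two GCN layers: H = ReLU(A_hat X W1), y = sigmoid(A_hat H W2).
# The sigmoid keeps each per-node prediction in (0, 1), matching the
# interpretation of a de-rating factor as a probability-like score.
H = np.maximum(A_hat @ X @ W1, 0.0)
y = sigmoid(A_hat @ H @ W2)

print(y.shape)  # one scalar prediction per node
```

In a real setting, the predictions for sequential elements would be regressed against the de-rating factors measured by fault injection on a small subset of flip-flops, then extrapolated to the rest of the circuit.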
Related papers
- Topology-Agnostic Graph U-Nets for Scalar Field Prediction on Unstructured Meshes [2.4306216325375196]
TAG U-Net is a graph convolutional network that can be trained on any input mesh or graph structure.
The model constructs coarsened versions of each input graph and performs a set of convolution and pooling operations to predict the node-wise outputs on the original graph.
arXiv Detail & Related papers (2024-10-08T22:27:35Z) - Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z) - Graph Mining under Data scarcity [6.229055041065048]
We propose an Uncertainty Estimator framework that can be applied on top of any generic Graph Neural Networks (GNNs)
We train these models under the classic episodic learning paradigm in the $n$-way, $k$-shot fashion, in an end-to-end setting.
Our method outperforms the baselines, which demonstrates the efficacy of the Uncertainty Estimator for Few-shot node classification on graphs with a GNN.
arXiv Detail & Related papers (2024-06-07T10:50:03Z) - Power Failure Cascade Prediction using Graph Neural Networks [4.667031410586657]
We propose a flow-free model that predicts grid states at every generation of a cascade process given an initial contingency and power injection values.
We show that the proposed model reduces the computational time by almost two orders of magnitude.
arXiv Detail & Related papers (2024-04-24T18:45:50Z) - GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z) - Networked Time Series Imputation via Position-aware Graph Enhanced
Variational Autoencoders [31.953958053709805]
We design a new model named PoGeVon which leverages variational autoencoder (VAE) to predict missing values over both node time series features and graph structures.
Experiment results demonstrate the effectiveness of our model over baselines.
arXiv Detail & Related papers (2023-05-29T21:11:34Z) - GCN-FFNN: A Two-Stream Deep Model for Learning Solution to Partial
Differential Equations [3.5665681694253903]
This paper introduces a novel two-stream deep model based on graph convolutional network (GCN) architecture and feed-forward neural networks (FFNN)
The proposed GCN-FFNN model learns from two types of input representations, i.e. grid and graph data, obtained via the discretization of the PDE domain.
The obtained numerical results demonstrate the applicability and efficiency of the proposed GCN-FFNN model over individual GCN and FFNN models.
arXiv Detail & Related papers (2022-04-28T19:16:31Z) - Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised
Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z) - An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose a novel Robust Graph Convolutional Neural Networks for possible erroneous single-view or multi-view data.
By incorporating extra layers via autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
arXiv Detail & Related papers (2021-03-27T04:47:59Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Revisiting Graph based Collaborative Filtering: A Linear Residual Graph
Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.