Reinforcement Learning for Scalable Logic Optimization with Graph Neural Networks
- URL: http://arxiv.org/abs/2105.01755v1
- Date: Tue, 4 May 2021 20:51:54 GMT
- Title: Reinforcement Learning for Scalable Logic Optimization with Graph Neural Networks
- Authors: Xavier Timoneda, Lukas Cavigelli
- Abstract summary: We propose to combine graph convolutional networks with reinforcement learning and a novel, scalable node embedding method to learn which local transforms should be applied to the logic graph.
We show that this method achieves a similar size reduction as ABC on smaller circuits and outperforms it by 1.5-1.75x on larger random graphs.
- Score: 5.156191363676411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logic optimization is an NP-hard problem commonly approached through
hand-engineered heuristics. We propose to combine graph convolutional networks
with reinforcement learning and a novel, scalable node embedding method to
learn which local transforms should be applied to the logic graph. We show that
this method achieves a similar size reduction as ABC on smaller circuits and
outperforms it by 1.5-1.75x on larger random graphs.
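A minimal, hypothetical sketch of the idea described in the abstract (not the authors' implementation): one graph-convolution layer computes node embeddings for a small logic graph, and a linear policy head scores candidate local transforms per node, from which an action is sampled. All sizes, the number of candidate transforms, and the toy adjacency matrix are illustrative assumptions; a real agent would update the weights from size-reduction rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed logic graph on 5 nodes (adjacency matrix is made up for illustration).
A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]], dtype=float)

# Symmetric normalization with self-loops, as in standard GCN propagation.
A_hat = A + A.T + np.eye(5)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

X = rng.normal(size=(5, 8))           # initial node features
W = rng.normal(size=(8, 16)) * 0.1    # graph-convolution weights
P = rng.normal(size=(16, 3)) * 0.1    # policy head: 3 hypothetical local transforms

H = np.maximum(A_norm @ X @ W, 0.0)   # one GCN layer with ReLU -> node embeddings

# Softmax policy over candidate transforms at each node.
logits = H @ P
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Sample one local transform per node (RL exploration step).
actions = np.array([rng.choice(3, p=p) for p in probs])
print(actions.shape)
```

In a full REINFORCE-style loop, the reward would be the reduction in graph size after applying the sampled transforms, backpropagated through the policy to update `W` and `P`.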
Related papers
- Unifews: Unified Entry-Wise Sparsification for Efficient Graph Neural Network [10.556366638048384]
Graph Neural Networks (GNNs) have shown promising performance in various graph learning tasks, but at the cost of resource-intensive computations.
Previous studies attempt to reduce the computational budget by leveraging graph-level or network-level sparsification techniques.
We propose Unifews, which unifies the two operations in an entry-wise manner considering individual matrix elements.
arXiv Detail & Related papers (2024-03-20T03:07:30Z)
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- CoRe-GD: A Hierarchical Framework for Scalable Graph Visualization with GNNs [20.706469085872516]
We introduce a scalable Graph Neural Network (GNN) based graph drawing framework with sub-quadratic runtime that can learn to optimize stress.
Inspired by classical stress optimization techniques and force-directed layout algorithms, we create a coarsening hierarchy for the input graph.
To enhance information propagation within the network, we propose a novel positional rewiring technique.
arXiv Detail & Related papers (2024-02-09T10:50:45Z)
- X-RLflow: Graph Reinforcement Learning for Neural Network Subgraphs Transformation [0.0]
Graph superoptimisation systems perform a sequence of subgraph substitutions on neural networks to find the optimal computation graph structure.
We show that our approach can outperform state-of-the-art superoptimisation systems over a range of deep learning models, by up to 40% on those based on transformer-style architectures.
arXiv Detail & Related papers (2023-04-28T09:06:18Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy [5.466414428765544]
We present a new distributed graph learning system GraphTheta.
It supports multiple training strategies and enables efficient and scalable learning on big graphs.
This work represents the largest edge-attributed GNN learning task conducted on a billion-scale network in the literature.
arXiv Detail & Related papers (2021-04-21T14:51:33Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved a remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partitioning, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.