Computing Steiner Trees using Graph Neural Networks
- URL: http://arxiv.org/abs/2108.08368v1
- Date: Wed, 18 Aug 2021 19:55:16 GMT
- Title: Computing Steiner Trees using Graph Neural Networks
- Authors: Reyan Ahmed, Md Asadullah Turja, Faryad Darabi Sahneh, Mithun Ghosh,
Keaton Hamm, and Stephen Kobourov
- Abstract summary: In this paper, we tackle the Steiner Tree Problem.
We employ four learning frameworks to compute low-cost Steiner trees.
Our findings suggest that the out-of-the-box application of GNN methods does worse than the classic 2-approximation algorithm.
- Score: 1.0159681653887238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks have been successful in many learning problems and
real-world applications. A recent line of research explores the power of graph
neural networks to solve combinatorial and graph algorithmic problems such as
subgraph isomorphism, detecting cliques, and the traveling salesman problem.
However, many NP-complete problems are as yet unexplored using this approach.
In this paper, we tackle the Steiner Tree Problem. We employ four learning
frameworks to compute low-cost Steiner trees: feed-forward neural networks,
graph neural networks, graph convolutional networks, and a graph attention
model. We use these frameworks in two fundamentally different ways: 1) to train
the models to learn the actual Steiner tree nodes, and 2) to train the models to
learn good Steiner point candidates that are then connected to the tree under
construction via shortest paths in a greedy fashion. We illustrate the robustness of our
heuristics on several random graph generation models as well as the SteinLib
data library. Our findings suggest that the out-of-the-box application of GNN
methods does worse than the classic 2-approximation algorithm. However, when
combined with a greedy shortest-path construction, the GNN approach performs
slightly better than the 2-approximation algorithm. This result sheds light on the fundamental
capabilities and limitations of graph learning techniques on classical
NP-complete problems.
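For concreteness, the sketch below illustrates the two constructions the abstract refers to: the classic 2-approximation (minimum spanning tree of the terminals' metric closure) and a greedy shortest-path construction in which a scoring hook, standing in for a trained model, biases which node is attached next. This is a minimal illustration using networkx; the function names and the `score` hook are ours, not the authors' implementation. networkx also ships a ready-made approximation as networkx.algorithms.approximation.steiner_tree.

```python
# Minimal sketches of (a) the classic 2-approximation (metric-closure MST)
# and (b) a greedy shortest-path construction that a learned scorer can bias.
# Assumes G is a connected, weighted, undirected networkx graph.
import itertools
import networkx as nx

def steiner_two_approx(G, terminals, weight="weight"):
    """Classic 2-approximation: MST of the terminals' metric closure."""
    closure, paths = nx.Graph(), {}
    for u, v in itertools.combinations(terminals, 2):
        d, p = nx.single_source_dijkstra(G, u, v, weight=weight)
        closure.add_edge(u, v, weight=d)
        paths[frozenset((u, v))] = p
    edges = set()
    for u, v in nx.minimum_spanning_tree(closure, weight=weight).edges():
        p = paths[frozenset((u, v))]
        edges.update(zip(p, p[1:]))
    # The union of shortest paths may contain cycles; one more MST prunes them.
    return nx.minimum_spanning_tree(G.edge_subgraph(edges), weight=weight)

def greedy_path_heuristic(G, terminals, score=None, weight="weight"):
    """Grow a tree by repeatedly attaching an unconnected terminal via its
    shortest path; `score` is a hypothetical hook through which a trained
    model could bias which node is attached next."""
    terminals = list(terminals)
    tree, edges, todo = {terminals[0]}, set(), set(terminals[1:])
    while todo:
        # Shortest paths from the current tree to all other nodes.
        dist, path = nx.multi_source_dijkstra(G, tree, weight=weight)
        rank = lambda t: dist[t] if score is None else dist[t] - score(t)
        t = min(todo, key=rank)
        edges.update(zip(path[t], path[t][1:]))
        tree.update(path[t])
        todo -= tree
    return G.edge_subgraph(edges)
```

In the paper's second framework, the role of the scoring hook is played by the learned prediction of good Steiner point candidates; here it is just a placeholder callable.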
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer, based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions (the basic Sinkhorn iteration is sketched below).
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
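For reference, the classic Sinkhorn iteration that the LinSATNet entry above extends alternates row and column rescaling of a positive matrix until both marginals match. A minimal single-pair-of-marginals sketch of our own, not the paper's jointly encoded multi-set extension:

```python
# Classic Sinkhorn iteration: alternately rescale rows and columns of a
# positive matrix until the prescribed marginals are (approximately) met.
import numpy as np

def sinkhorn(M, row_sums, col_sums, n_iters=200):
    P = np.asarray(M, dtype=float).copy()
    for _ in range(n_iters):
        P *= (row_sums / P.sum(axis=1))[:, None]  # enforce row marginals
        P *= (col_sums / P.sum(axis=0))[None, :]  # enforce column marginals
    return P

# Example: project a random positive matrix onto the doubly stochastic set.
P = sinkhorn(np.random.rand(4, 4), np.ones(4), np.ones(4))
```

Because each step is a plain tensor operation, gradients can flow through the normalizations, which is what makes a differentiable layer of this kind possible.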
- Graph Sparsifications using Neural Network Assisted Monte Carlo Tree Search [9.128418945452088]
We describe an approach for computing graph sparsifiers by combining a graph neural network and Monte Carlo Tree Search.
We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output.
This neural network is then used in a Monte Carlo search to compute a sparsifier (the node-proposal interface is sketched below).
arXiv Detail & Related papers (2023-11-17T03:59:50Z)
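The "propose a new node given a partial solution" interface described in the sparsification entry above can be pictured as a node scorer whose input feature marks the current partial solution. A hand-rolled one-layer sketch in plain PyTorch, with our own illustrative shapes rather than the authors' architecture:

```python
# Sketch of a next-node proposal network: the partial solution enters as a
# 0/1 node feature and the network scores every node as the next addition.
import torch
import torch.nn as nn

class NextNodeScorer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.embed = nn.Linear(1, hidden)  # feature: 1 if node is in the partial solution
        self.out = nn.Linear(hidden, 1)

    def forward(self, adj, in_solution):
        # adj: (n, n) normalized adjacency; in_solution: (n,) 0/1 mask
        h = torch.relu(self.embed(in_solution.unsqueeze(-1)))
        h = adj @ h                        # one round of message passing
        scores = self.out(h).squeeze(-1)
        # Nodes already in the solution cannot be proposed again.
        return scores.masked_fill(in_solution.bool(), float("-inf"))

# Usage: greedy decoding; a Monte Carlo search would instead use the
# scores as move priors.
n = 6
adj = torch.rand(n, n); adj = (adj + adj.T) / 2
mask = torch.zeros(n); mask[0] = 1.0
next_node = NextNodeScorer()(adj, mask).argmax().item()
```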
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator (the basic operator is sketched below).
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
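The kernelized operator in the NodeFormer entry above builds on the standard Gumbel-Softmax trick for differentiable discrete sampling, which PyTorch ships directly. The sketch below shows only that basic operator, not the paper's kernelized all-pair variant:

```python
# Gumbel-Softmax: differentiable soft sampling of discrete "which
# neighbor" choices, so a sampled graph structure remains trainable.
import torch
import torch.nn.functional as F

n = 8
logits = torch.randn(n, n, requires_grad=True)  # pairwise affinity logits

# Each row becomes a soft one-hot neighbor selection; gradients flow
# through the relaxation back to the logits.
soft_adj = F.gumbel_softmax(logits, tau=0.5, dim=-1)
loss = soft_adj.sum()   # placeholder objective
loss.backward()         # logits.grad is now populated
```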
- Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search [9.061356032792952]
We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search.
We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output.
This neural network is then used in a Monte Carlo search to compute a Steiner tree (a simplified search loop is sketched below).
arXiv Detail & Related papers (2023-04-30T17:15:38Z)
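The search loop in the Steiner-tree entry above can be pictured as PUCT-style selection in which the trained network supplies move priors and rollouts supply value estimates. A deliberately simplified flat (single-level) search of our own, not the authors' implementation:

```python
# Flat Monte Carlo search over candidate next nodes: network priors guide
# exploration, rollout values guide exploitation, visits pick the winner.
import math, random
from collections import defaultdict

def mcts_pick_next(candidates, priors, rollout_value, n_sims=200, c=1.4):
    """Pick the next node to add via PUCT-style selection."""
    N = defaultdict(int)    # visit counts per candidate
    W = defaultdict(float)  # accumulated rollout values per candidate
    for _ in range(n_sims):
        total = sum(N.values()) + 1
        def puct(a):
            q = W[a] / N[a] if N[a] else 0.0
            return q + c * priors[a] * math.sqrt(total) / (1 + N[a])
        a = max(candidates, key=puct)
        v = rollout_value(a)  # e.g. value of completing the tree greedily
        N[a] += 1
        W[a] += v
    return max(candidates, key=lambda a: N[a])

# Usage with dummy priors/rollouts standing in for the GNN and simulator:
best = mcts_pick_next([1, 2, 3], {1: .5, 2: .3, 3: .2},
                      lambda a: random.random())
```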
- Learning to Compare Nodes in Branch and Bound with Graph Neural Networks [5.08128537391027]
Branch-and-bound approaches in integer programming require ordering portions of the search space to explore next.
We propose a new siamese graph neural network model to tackle this problem, where the nodes are represented as bipartite graphs with attributes (the ranking objective is sketched below).
We evaluate our method by solving the instances in a plain framework where the nodes are explored according to their rank.
arXiv Detail & Related papers (2022-10-30T19:38:23Z)
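The comparison objective in the branch-and-bound entry above amounts to a siamese ranking setup: a shared encoder embeds two search-tree nodes, and a margin loss pushes the better node's score higher. A minimal sketch with a stand-in MLP encoder; the paper's encoder is a GNN over attributed bipartite graphs:

```python
# Siamese ranking: one shared encoder scores both nodes; MarginRankingLoss
# enforces that the preferred node outranks the other by a margin.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MarginRankingLoss(margin=1.0)

x_a, x_b = torch.randn(8, 16), torch.randn(8, 16)  # two nodes' features
target = torch.ones(8)  # +1 means node A should outrank node B
loss = loss_fn(encoder(x_a).squeeze(-1), encoder(x_b).squeeze(-1), target)
loss.backward()
```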
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Graph-level Neural Networks: Current Progress and Future Directions [61.08696673768116]
Graph-level Neural Networks (GLNNs, deep learning-based graph-level learning methods) have attracted attention due to their strength in modeling high-dimensional data.
We propose a systematic taxonomy covering GLNNs upon deep neural networks, graph neural networks, and graph pooling.
arXiv Detail & Related papers (2022-05-31T06:16:55Z)
- Vulcan: Solving the Steiner Tree Problem with Graph Neural Networks and Deep Reinforcement Learning [2.419365674940108]
We design a novel model, Vulcan, based on graph neural networks and deep reinforcement learning.
Vulcan can also find solutions to a wide range of NP-hard problems.
arXiv Detail & Related papers (2021-11-21T12:53:50Z)
- Deep Neural Network for DrawiNg Networks, (DNN)^2 [1.5749416770494706]
We present a novel graph drawing framework called (DNN)^2, Deep Neural Network for DrawiNg Networks.
We show that (DNN)^2 performs well; the results are encouraging, as the deep learning approach to graph drawing is novel, and many leads for future work are identified.
arXiv Detail & Related papers (2021-08-08T13:23:26Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training (a sampling sketch follows below).
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
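Both graphon entries above rely on Bernoulli sampling from a graphon W: draw latent points in [0, 1], include each edge independently with probability W(u_i, u_j), and resample with a larger n as training proceeds. A minimal sketch with an illustrative W of our own choosing:

```python
# Bernoulli sampling from a graphon: latent positions in [0,1], then each
# edge appears independently with probability W(u_i, u_j).
import numpy as np

def sample_from_graphon(W, n, rng=np.random.default_rng()):
    u = rng.uniform(size=n)                        # latent node positions
    probs = W(u[:, None], u[None, :])              # pairwise edge probabilities
    A = (rng.uniform(size=(n, n)) < probs).astype(float)
    A = np.triu(A, 1)
    return A + A.T                                 # symmetric, no self-loops

W = lambda x, y: 0.8 * np.exp(-3 * np.abs(x - y))  # an example graphon

# "Increase and conquer": train on successively larger sampled graphs.
for n in [50, 100, 200, 400]:
    A = sample_from_graphon(W, n)
    # ... run a few GNN training epochs on A here ...
```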
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores (the sort-then-convolve step is sketched below).
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
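Schematically, the Node2Seq entry above scores a target node's neighbors with attention, sorts them into a sequence, and then applies a 1D convolution so that each rank position receives its own learned weight. A minimal sketch with our own shapes and names, not the authors' implementation:

```python
# Node2Seq-style aggregation: attention sorts neighbors into a sequence,
# then a 1D convolution assigns a distinct weight to each rank position.
import torch
import torch.nn as nn
import torch.nn.functional as F

d, k = 16, 8                      # feature dim, number of neighbors
target = torch.randn(d)
neighbors = torch.randn(k, d)

attn = F.softmax(neighbors @ target, dim=0)   # attention scores per neighbor
order = attn.argsort(descending=True)         # sort neighbors by attention
seq = neighbors[order].T.unsqueeze(0)         # (1, d, k) ordered sequence

conv = nn.Conv1d(d, d, kernel_size=k)         # rank-specific learned weights
aggregated = conv(seq).squeeze()              # (d,) aggregated node embedding
```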
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.