Vulcan: Solving the Steiner Tree Problem with Graph Neural Networks and
Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2111.10810v1
- Date: Sun, 21 Nov 2021 12:53:50 GMT
- Title: Vulcan: Solving the Steiner Tree Problem with Graph Neural Networks and
Deep Reinforcement Learning
- Authors: Haizhou Du and Zong Yan and Qiao Xiang and Qinqing Zhan
- Abstract summary: We design a novel model, Vulcan, based on graph neural networks and deep reinforcement learning.
Vulcan can also find solutions to a wide range of NP-hard problems.
- Score: 2.419365674940108
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Steiner Tree Problem (STP) in graphs aims to find a tree of minimum weight in
the graph that connects a given set of vertices. It is a classic NP-hard
combinatorial optimization problem and has many real-world applications (e.g.,
VLSI chip design, transportation network planning and wireless sensor
networks). Many exact and approximate algorithms have been developed for STP,
but they suffer from high computational complexity and weak worst-case solution
guarantees, respectively. Heuristic algorithms have also been developed, but each
requires application-domain knowledge to design and suits only specific
scenarios. Motivated by the recently reported observation
that instances of the same NP-hard combinatorial problem may maintain the same
or similar combinatorial structure but mainly differ in their data, we
investigate the feasibility and benefits of applying machine learning
techniques to solving STP. To this end, we design Vulcan, a novel model based on
graph neural networks and deep reinforcement learning. The core of Vulcan
is a novel, compact graph embedding that transforms high-dimensional graph
structure data (i.e., path-changed information) into a low-dimensional vector
representation. Given an STP instance, Vulcan uses this embedding to encode its
path-related information and sends the encoded graph to a deep reinforcement
learning component based on a double deep Q network (DDQN) to find solutions.
In addition to STP, Vulcan can also find solutions to a wide range of NP-hard
problems (e.g., SAT, MVC and X3C) by reducing them to STP. We implement a
prototype of Vulcan and demonstrate its efficacy and efficiency with extensive
experiments using real-world and synthetic datasets.
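
As a concrete illustration of the pipeline the abstract describes, the following minimal sketch (Python, not the authors' implementation) grows a partial solution from the terminal set, scoring candidate vertices with a Q-function over a graph embedding and greedily adding the best one until the terminals are connected. The embed() and q_value() stubs, and the use of networkx, are illustrative assumptions standing in for Vulcan's path-aware embedding and trained double deep Q-network.

import networkx as nx

def embed(G, partial):
    # Stand-in for Vulcan's compact, path-aware graph embedding:
    # here, just a (degree, already-selected) pair per vertex.
    return {v: (G.degree(v), v in partial) for v in G}

def q_value(state, v):
    # Stand-in for the trained DDQN: a trivial heuristic preferring
    # high-degree vertices outside the partial solution.
    deg, selected = state[v]
    return -1.0 if selected else float(deg)

def build_steiner_tree(G, terminals):
    partial = set(terminals)
    # Grow the vertex set one action at a time, as in the DRL loop.
    while not nx.is_connected(G.subgraph(partial)):
        state = embed(G, partial)
        frontier = {u for v in partial for u in G[v]} - partial
        partial.add(max(frontier, key=lambda v: q_value(state, v)))
    # Extract a tree over the selected vertices.
    return nx.minimum_spanning_tree(G.subgraph(partial), weight="weight")

In the paper's setting, embed() would be the learned low-dimensional encoding of path-related information and q_value() the trained DDQN; the greedy argmax corresponds to acting with the learned policy.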
Related papers
- GradINN: Gradient Informed Neural Network [2.287415292857564]
We propose a methodology inspired by Physics-Informed Neural Networks (PINNs).
GradINNs leverage prior beliefs about a system's gradient to constrain the predicted function's gradient across all input dimensions.
We demonstrate the advantages of GradINNs, particularly in low-data regimes, on diverse problems spanning non-time-dependent systems.
arXiv Detail & Related papers (2024-09-03T14:03:29Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - DRGCN: Dynamic Evolving Initial Residual for Deep Graph Convolutional
Networks [19.483662490506646]
We propose a novel model called Dynamic evolving initial Residual Graph Convolutional Network (DRGCN).
Our experimental results show that our model effectively relieves the problem of over-smoothing in deep GCNs.
Our model reaches new SOTA results on the large-scale ogbn-arxiv dataset of Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-02-10T06:57:12Z) - A Differentiable Approach to Combinatorial Optimization using Dataless
Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
arXiv Detail & Related papers (2022-03-15T19:21:31Z) - Neural combinatorial optimization beyond the TSP: Existing architectures
under-represent graph structure [9.673093148930876]
We analyze how and whether recent neural architectures can be applied to graph problems of practical importance.
We show that augmenting the structural representation of problems with distance information is a promising step towards the still-ambitious goal of learning multi-purpose autonomous solvers.
arXiv Detail & Related papers (2022-01-03T14:14:28Z) - Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
This representation is combined with a dueling double deep Q-network to create an end-to-end algorithm that solves CNP for the first time.
arXiv Detail & Related papers (2021-12-03T14:23:05Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z) - Computing Steiner Trees using Graph Neural Networks [1.0159681653887238]
In this paper, we tackle the Steiner Tree Problem.
We employ four learning frameworks to compute low-cost Steiner trees.
Our findings suggest that the out-of-the-box application of GNN methods does worse than the classic 2-approximation algorithm (a sketch of this baseline appears after this list).
arXiv Detail & Related papers (2021-08-18T19:55:16Z) - An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose novel Robust Graph Convolutional Networks for possibly erroneous single-view or multi-view data.
By incorporating extra autoencoder-based layers into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
arXiv Detail & Related papers (2021-03-27T04:47:59Z) - Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z) - Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partition, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
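
For context on the "Computing Steiner Trees using Graph Neural Networks" entry above, the classic 2-approximation it compares against is the metric-closure construction (Kou-Markowsky-Berman). A minimal sketch, assuming networkx is available:

import itertools
import networkx as nx

def steiner_2approx(G, terminals, weight="weight"):
    # 1. Metric closure over the terminals: a complete graph whose
    #    edge weights are shortest-path distances in G.
    closure = nx.Graph()
    for u, v in itertools.combinations(terminals, 2):
        dist, path = nx.single_source_dijkstra(G, u, target=v, weight=weight)
        closure.add_edge(u, v, weight=dist, path=path)
    # 2. Take the MST of the closure and expand each MST edge back
    #    into the shortest path it represents.
    T = nx.Graph()
    for u, v, data in nx.minimum_spanning_edges(closure, data=True):
        path = data["path"]
        for a, b in zip(path, path[1:]):
            T.add_edge(a, b, weight=G[a][b][weight])
    # 3. A final MST removes any cycles introduced by overlapping
    #    paths (pruning non-terminal leaves would tighten it further).
    return nx.minimum_spanning_tree(T, weight=weight)

# Example: terminals {0, 2, 3} on a small weighted graph.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (0, 2, 3), (2, 3, 1)])
print(sorted(steiner_2approx(G, [0, 2, 3]).edges()))  # [(0, 1), (1, 2), (2, 3)]

The returned tree costs at most twice the optimum, which is the kind of weak worst-case guarantee the main abstract attributes to approximation algorithms.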
This list is automatically generated from the titles and abstracts of the papers on this site.