Graph Sparsifications using Neural Network Assisted Monte Carlo Tree Search
- URL: http://arxiv.org/abs/2311.10316v1
- Date: Fri, 17 Nov 2023 03:59:50 GMT
- Title: Graph Sparsifications using Neural Network Assisted Monte Carlo Tree Search
- Authors: Alvin Chiu, Mithun Ghosh, Reyan Ahmed, Kwang-Sung Jun, Stephen Kobourov, Michael T. Goodrich
- Abstract summary: We describe an approach for computing graph sparsifiers by combining a graph neural network and Monte Carlo Tree Search.
We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output.
This neural network is then used in a Monte Carlo search to compute a sparsifier.
- Score: 9.128418945452088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks have been successful for machine learning, as well as
for combinatorial and graph problems such as the Subgraph Isomorphism Problem
and the Traveling Salesman Problem. We describe an approach for computing graph
sparsifiers by combining a graph neural network and Monte Carlo Tree Search. We
first train a graph neural network that takes as input a partial solution and
proposes a new node to be added as output. This neural network is then used in
a Monte Carlo search to compute a sparsifier. The proposed method consistently
outperforms several standard approximation algorithms on different types of
graphs and often finds the optimal solution.
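The search procedure the abstract describes can be pictured as a loop: a learned policy scores which node to add to a partial solution, and Monte Carlo Tree Search uses those scores as priors while rollouts estimate solution quality. Below is a minimal, hypothetical sketch of that loop. The `policy_prior` heuristic stands in for the trained graph neural network, and the `reward` function is a toy placeholder, not the paper's actual sparsifier objective.

```python
import math
import random

def policy_prior(graph, partial, candidate):
    # Stand-in for the trained GNN: score how promising it is to add
    # `candidate` to the partial solution. Here: one plus its number of
    # neighbours already in the partial solution (a made-up heuristic).
    return 1 + sum(1 for v in graph[candidate] if v in partial)

def reward(graph, solution):
    # Toy reward: fraction of the graph's edges with both endpoints in
    # the solution (a placeholder for a real sparsifier quality measure).
    edges = {frozenset((u, v)) for u in graph for v in graph[u]}
    return sum(1 for e in edges if e <= solution) / len(edges)

def mcts_step(graph, partial, budget, n_sim=50, c=1.4):
    # One Monte Carlo tree search decision: pick the next node to add.
    candidates = [v for v in graph if v not in partial]
    visits = {v: 0 for v in candidates}
    value = {v: 0.0 for v in candidates}
    for _ in range(n_sim):
        total = sum(visits.values()) + 1
        # PUCT-style selection: exploit mean rollout value, explore in
        # proportion to the policy prior.
        v = max(candidates, key=lambda x: value[x] / (visits[x] + 1)
                + c * policy_prior(graph, partial, x)
                * math.sqrt(total) / (1 + visits[x]))
        # Random rollout: complete the solution up to the node budget.
        rollout = set(partial) | {v}
        rest = [u for u in graph if u not in rollout]
        random.shuffle(rest)
        rollout |= set(rest[:max(0, budget - len(rollout))])
        visits[v] += 1
        value[v] += reward(graph, rollout)
    return max(candidates, key=lambda x: visits[x])

def sparsify(graph, budget, seed=0):
    # Grow a node subset one MCTS decision at a time.
    random.seed(seed)
    partial = {next(iter(graph))}
    while len(partial) < budget:
        partial.add(mcts_step(graph, partial, budget))
    return partial
```

For example, `sparsify({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}, 3)` returns a three-node subset grown from node 0. In the paper the prior comes from a network trained on partial solutions, which is what lets the search beat the hand-crafted heuristic used here.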
Related papers
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search [9.061356032792952]
We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search.
We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output.
This neural network is then used in a Monte Carlo search to compute a Steiner tree.
arXiv Detail & Related papers (2023-04-30T17:15:38Z)
- Subgraph Matching via Query-Conditioned Subgraph Matching Neural Networks and Bi-Level Tree Search [33.9052190473029]
Subgraph Matching is a core operation in graph database search, biomedical analysis, social group finding, etc.
In this paper, we propose a novel encoder-decoder neural network architecture to dynamically compute the matching information between the query and the target graphs.
Experiments on five large real-world target graphs show that N-BLS can significantly improve the subgraph matching performance.
arXiv Detail & Related papers (2022-07-21T04:47:21Z)
- A generative neural network model for random dot product graphs [1.1421942894219896]
GraphMoE is a novel approach to learning generative models for random graphs.
The neural network is trained to match the distribution of a class of random graphs by way of a moment estimator.
arXiv Detail & Related papers (2022-04-15T19:59:22Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs sampled from the graphon via Bernoulli sampling.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Self-Supervised Deep Graph Embedding with High-Order Information Fusion for Community Discovery [3.6002285517472767]
The proposed algorithm uses a self-supervised mechanism and different high-order information of the graph to train multiple deep graph convolutional neural networks.
The outputs of multiple graph convolution neural networks are fused to extract the representations of nodes which include the attribute and structure information of a graph.
arXiv Detail & Related papers (2021-02-05T17:22:28Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
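The line-graph construction behind this reformulation is standard graph theory: each edge of the original graph becomes a node, and two such nodes are adjacent exactly when their edges share an endpoint. A minimal sketch (illustrative only, not the paper's neural architecture):

```python
def line_graph(edges):
    # Build the line graph as an adjacency dict: one node per original
    # edge; two line-graph nodes are adjacent when their underlying
    # edges share an endpoint.
    edges = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in edges}
    for i, e in enumerate(edges):
        for f in edges[i + 1:]:
            if set(e) & set(f):
                adj[e].add(f)
                adj[f].add(e)
    return adj
```

On the path `0-1-2-3`, for instance, the line-graph node `(0, 1)` is adjacent to `(1, 2)` but not to `(2, 3)`. A candidate link in the original graph then maps to a single line-graph node, which is why link prediction can be recast as node classification.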
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Permutation Invariant Graph Generation via Score-Based Generative Modeling [114.12935776726606]
We propose a permutation invariant approach to modeling graphs, using the recent framework of score-based generative modeling.
In particular, we design a permutation equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph.
For graph generation, we find that our learning approach achieves better or comparable results to existing models on benchmark datasets.
arXiv Detail & Related papers (2020-03-02T03:06:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.