Revisiting Random Walks for Learning on Graphs
- URL: http://arxiv.org/abs/2407.01214v1
- Date: Mon, 1 Jul 2024 11:59:59 GMT
- Title: Revisiting Random Walks for Learning on Graphs
- Authors: Jinwoo Kim, Olga Zaghen, Ayhan Suleymanzade, Youngmin Ryou, Seunghoon Hong
- Abstract summary: We revisit a simple idea for machine learning on graphs, where a random walk on a graph produces a machine-readable record.
We refer to these machines as random walk neural networks.
We show that we can design them to be isomorphism invariant while capable of universal approximation of graph functions in probability.
- Score: 15.64437981617624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We revisit a simple idea for machine learning on graphs, where a random walk on a graph produces a machine-readable record, and this record is processed by a deep neural network to directly make vertex-level or graph-level predictions. We refer to these stochastic machines as random walk neural networks, and show that we can design them to be isomorphism invariant while capable of universal approximation of graph functions in probability. A useful finding is that almost any kind of record of random walk guarantees probabilistic invariance as long as the vertices are anonymized. This enables us to record random walks in plain text and adopt a language model to read these text records to solve graph tasks. We further establish a parallelism to message passing neural networks using tools from Markov chain theory, and show that over-smoothing in message passing is alleviated by construction in random walk neural networks, while over-squashing manifests as probabilistic under-reaching. We show that random walk neural networks based on pre-trained language models can solve several hard problems on graphs, such as separating strongly regular graphs where the 3-WL test fails, counting substructures, and transductive classification on arXiv citation network without training. Code is available at https://github.com/jw9730/random-walk.
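To make the key step concrete, here is a minimal sketch, not the authors' implementation, of recording an anonymized random walk as plain text; the adjacency-list graph format and the dash-separated serialization are illustrative assumptions (see the linked repository for the actual code).

```python
import random

def random_walk(adj, start, length, rng=random):
    """Simple random walk: at each step, move to a uniformly random neighbor."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def anonymize(walk):
    """Relabel vertices by order of first appearance (1, 2, 3, ...), so the
    record captures only the walk's revisit pattern, not arbitrary vertex IDs."""
    names = {}
    record = []
    for v in walk:
        if v not in names:
            names[v] = len(names) + 1  # next unused anonymous name
        record.append(names[v])
    return record

def to_text(record):
    """Serialize the anonymized walk as plain text for a language model to read."""
    return "-".join(str(name) for name in record)

# Example: a 4-cycle; the original vertex labels never reach the record.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
walk = random_walk(adj, start=0, length=8)
print(to_text(anonymize(walk)))  # e.g. "1-2-1-2-3-4-1-4-3"
```

Because the record depends only on the revisit pattern of the walk, relabeling the graph's vertices leaves the distribution of records unchanged; this is the probabilistic invariance that lets a language model consume the text records directly.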
Related papers
- Sum-Product-Set Networks: Deep Tractable Models for Tree-Structured Graphs [0.0]
We propose sum-product-set networks, an extension of probabilistic circuits from unstructured data to tree-structured graph data.
We demonstrate that our tractable model performs comparably to various intractable models based on neural networks.
arXiv Detail & Related papers (2024-08-14T09:13:27Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - Finding Hamiltonian cycles with graph neural networks [0.0]
We train a small message-passing graph neural network to predict Hamiltonian cycles on Erdős–Rényi random graphs.
The model generalizes well to larger graph sizes and retains reasonable performance even on graphs eight times the original size.
arXiv Detail & Related papers (2023-06-10T21:18:31Z) - Machine learning of percolation models using graph convolutional neural networks [1.0499611180329804]
Prediction of percolation thresholds with machine learning methods remains challenging.
We build a powerful graph convolutional neural network to study the percolation in both supervised and unsupervised ways.
arXiv Detail & Related papers (2022-07-07T15:17:40Z) - A generative neural network model for random dot product graphs [1.1421942894219896]
GraphMoE is a novel approach to learning generative models for random graphs.
The neural network is trained to match the distribution of a class of random graphs by way of a moment estimator.
arXiv Detail & Related papers (2022-04-15T19:59:22Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z) - Bosonic Random Walk Networks for Graph Learning [32.24009574184356]
We explore applications of multi-particle quantum walks to diffusing information across graphs.
Our model is based on learning the operators that govern the dynamics of quantum random walkers on graphs.
We demonstrate the effectiveness of our method on classification and regression tasks.
arXiv Detail & Related papers (2020-12-31T21:40:40Z) - Analyzing Neural Networks Based on Random Graphs [77.34726150561087]
We perform a massive evaluation of neural networks with architectures corresponding to random graphs of various types.
We find that no classical numerical graph invariant by itself suffices to single out the best networks.
We also find that networks with primarily short-range connections perform better than networks which allow for many long-range connections.
arXiv Detail & Related papers (2020-02-19T11:04:49Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.