RaWaNet: Enriching Graph Neural Network Input via Random Walks on Graphs
- URL: http://arxiv.org/abs/2109.07555v1
- Date: Wed, 15 Sep 2021 20:04:01 GMT
- Title: RaWaNet: Enriching Graph Neural Network Input via Random Walks on Graphs
- Authors: Anahita Iravanizad, Edgar Ivan Sanchez Medina, Martin Stoll
- Abstract summary: Graph neural networks (GNNs) have gained increasing popularity and have shown very promising results for data that are represented by graphs.
We propose a random walk data processing of the graphs based on three selected lengths. Namely, (regular) walks of length 1 and 2, and a fractional walk of length $\gamma \in (0,1)$, in order to capture the different local and global dynamics on the graphs.
We test our method on various molecular datasets by passing the processed node features to the network in order to perform several classification and regression tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, graph neural networks (GNNs) have gained increasing
popularity and have shown very promising results for data that are represented
by graphs. The majority of GNN architectures are designed based on developing
new convolutional and/or pooling layers that better extract the hidden and
deeper representations of the graphs to be used for different prediction tasks.
The inputs to these layers are mainly the three default descriptors of a graph,
node features $(X)$, adjacency matrix $(A)$, and edge features $(W)$ (if
available). To provide a more enriched input to the network, we propose a
random walk data processing of the graphs based on three selected lengths.
Namely, (regular) walks of length 1 and 2, and a fractional walk of length
$\gamma \in (0,1)$, in order to capture the different local and global dynamics
on the graphs. We also calculate the stationary distribution of each random
walk, which is then used as a scaling factor for the initial node features
($X$). This way, for each graph, the network receives multiple adjacency
matrices along with their individual weighting for the node features. We test
our method on various molecular datasets by passing the processed node features
to the network in order to perform several classification and regression tasks.
Interestingly, our method, despite not using edge features, which are heavily
exploited in molecular graph learning, lets a shallow network outperform
well-known deep GNNs.
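To make the processing concrete, below is a minimal sketch (in NumPy, not the authors' code) of how the three walk matrices and the stationary-distribution scalings could be assembled. The fractional walk is realized here as a fractional power of the symmetrically normalized adjacency via eigendecomposition, with negative eigenvalues clipped to keep the power real-valued; both choices are assumptions, since the abstract does not spell out the exact operator.

```python
import numpy as np

def walk_matrices(A, gamma=0.5):
    """Walk matrices of length 1, 2, and a fractional length gamma in (0, 1)."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    A_sym = D_inv_sqrt @ A @ D_inv_sqrt        # symmetric normalization (assumption)
    lam, U = np.linalg.eigh(A_sym)
    lam_g = np.clip(lam, 0.0, None) ** gamma   # clipping keeps the power real (assumption)
    return [A, A @ A, U @ np.diag(lam_g) @ U.T]

def scaled_node_features(walks, X):
    """Scale X by the stationary distribution of the random walk induced by
    each symmetric walk matrix M: pi is proportional to the row sums of M."""
    scaled = []
    for M in walks:
        pi = M.sum(axis=1)
        pi = pi / pi.sum()                     # normalize to a distribution
        scaled.append(pi[:, None] * X)
    return scaled
```

For each molecular graph, the network would then receive the three walk matrices together with their individually scaled copies of the node features, matching the "multiple adjacency matrices along with their individual weighting" described in the abstract.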
Related papers
- CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs [15.044471983688249]
We introduce a novel method that extracts information about higher-order structures in the graph.
Our method can lead to up to $31\%$ improvements in test accuracy.
arXiv Detail & Related papers (2024-09-12T16:56:26Z) - Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB)
arXiv Detail & Related papers (2023-05-18T07:27:18Z) - Path Integral Based Convolution and Pooling for Heterogeneous Graph
Neural Networks [2.5889737226898437]
Graph neural networks (GNNs) extend deep learning to graph-structured datasets.
Similar to convolutional neural networks (CNNs) used for image prediction, convolutional and pooling layers are the foundation of the success of GNNs on graph prediction tasks.
arXiv Detail & Related papers (2023-02-26T20:05:23Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Multi-Granularity Graph Pooling for Video-based Person Re-Identification [14.943835935921296]
Graph neural networks (GNNs) are introduced to aggregate temporal and spatial features of video samples.
Existing graph-based models, like STGCN, perform mean/max pooling on node features to obtain the graph representation (a minimal sketch of this readout appears after this list).
We propose the graph pooling network (GPNet) to learn a multi-granularity graph representation for video retrieval.
arXiv Detail & Related papers (2022-09-23T13:26:05Z) - Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors
to Sequences [55.329402218608365]
We propose Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z) - Node Feature Extraction by Self-Supervised Multi-scale Neighborhood
Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT)
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning techniques.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z) - Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling [31.812988573924674]
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties.
We propose the Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology.
NDP is more efficient compared to state-of-the-art graph pooling operators while reaching, at the same time, competitive performance on a significant variety of graph classification tasks.
arXiv Detail & Related papers (2019-10-24T21:42:12Z)
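As referenced in the GPNet entry above, here is a minimal sketch of the generic mean/max pooling readout that collapses per-node features into a single graph-level vector; this illustrates the standard operation those models build on, not GPNet's multi-granularity variant.

```python
import numpy as np

def graph_readout(H, mode="mean"):
    """Collapse per-node features H (n_nodes x d) into one graph-level
    vector by mean or max pooling over the node dimension."""
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "max":
        return H.max(axis=0)
    raise ValueError(f"unknown pooling mode: {mode}")
```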