Transforming PageRank into an Infinite-Depth Graph Neural Network
- URL: http://arxiv.org/abs/2207.00684v1
- Date: Fri, 1 Jul 2022 23:17:40 GMT
- Title: Transforming PageRank into an Infinite-Depth Graph Neural Network
- Authors: Andreas Roth, Thomas Liebig
- Abstract summary: Popular graph neural networks are shallow models, despite the success of very deep architectures in other application domains of deep learning.
We build on the close connection between GNNs and PageRank, which personalized PageRank extends with a personalization vector.
Adopting this idea, we propose the Personalized PageRank Graph Neural Network (PPRGNN), which extends the graph convolutional network to an infinite-depth model.
- Score: 1.0965065178451106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Popular graph neural networks are shallow models, despite the success of very
deep architectures in other application domains of deep learning. This reduces
the modeling capacity and leaves models unable to capture long-range
relationships. The shallow design is primarily due to over-smoothing, which causes
node states to become increasingly similar as depth grows. We build on the close
connection between GNNs and PageRank, which personalized PageRank extends with a
personalization vector.
Adopting this idea, we propose the Personalized PageRank Graph Neural Network
(PPRGNN), which extends the graph convolutional network to an infinite-depth
model that has a chance to reset the neighbor aggregation back to the initial
state in each iteration. We introduce an interpretable tweak to the reset
probability and prove that our approach converges to a unique solution without
imposing any constraints, even with infinitely many neighbor aggregations. As in
personalized PageRank, our result does not suffer from over-smoothing. At the same
time, time complexity remains linear and memory complexity stays constant,
independent of the network's depth, so the method scales well to large graphs. We
empirically show the effectiveness of
our approach for various node and graph classification tasks. PPRGNN
outperforms comparable methods in almost all cases.
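To make the described propagation concrete, the following is a minimal sketch (not the authors' code) of a personalized-PageRank-style neighbor aggregation that mixes each step with the initial node states. The function names, the reset probability `alpha`, and the normalized adjacency `A_hat` are illustrative assumptions; PPRGNN's exact formulation, including its interpretable tweak to the reset chance and its constant-memory training, is given in the paper.

```python
# Minimal sketch, assuming a dense numpy adjacency matrix; not the PPRGNN code.
import numpy as np


def normalized_adjacency(A):
    """Symmetrically normalize A with self-loops: D^(-1/2) (A + I) D^(-1/2)."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]


def ppr_propagate(A_hat, H0, alpha=0.1, tol=1e-6, max_iter=1000):
    """Iterate H <- (1 - alpha) * A_hat @ H + alpha * H0 until convergence.

    Each step aggregates over neighbors and, with weight alpha, resets toward
    the initial states H0, so node representations never collapse to a single
    over-smoothed vector and the fixed point is unique.
    """
    H = H0.copy()
    for _ in range(max_iter):
        H_next = (1.0 - alpha) * (A_hat @ H) + alpha * H0
        if np.linalg.norm(H_next - H) < tol:
            return H_next
        H = H_next
    return H


# Toy usage: a 4-node path graph with 2-dimensional initial features
# (in a GNN, H0 would come from a feature transformation such as an MLP).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H0 = np.random.default_rng(0).normal(size=(4, 2))
H = ppr_propagate(normalized_adjacency(A), H0, alpha=0.1)
```

Because alpha > 0 makes each step a contraction, the iteration converges to the personalized-PageRank diffusion alpha * (I - (1 - alpha) * A_hat)^(-1) @ H0, which stays anchored to the initial states instead of over-smoothing.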
Related papers
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution with the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing [29.618952407794776]
We propose an Alternating Graph-regularized Neural Network (AGNN) composed of a Graph Convolutional Layer (GCL) and a Graph Embedding Layer (GEL).
GEL is derived from a graph-regularized optimization containing a Laplacian embedding term, which can alleviate the over-smoothing problem.
AGNN is evaluated via a large number of experiments including performance comparison with some multi-layer or multi-order graph neural networks.
arXiv Detail & Related papers (2023-04-14T09:20:03Z)
- Search to Capture Long-range Dependency with Stacking GNNs for Graph Classification [41.84399177525008]
Shallow GNNs are more common due to the well-known over-smoothing problem facing deeper GNNs.
We propose a novel approach with the help of neural architecture search (NAS), dubbed LRGNN (Long-Range Graph Neural Networks).
arXiv Detail & Related papers (2023-02-17T03:40:17Z)
- Improving Graph Neural Networks at Scale: Combining Approximate PageRank and CoreRank [20.948992161528466]
We propose a scalable solution to propagate information in Graph Neural Networks (GNNs).
The CorePPR model uses a learnable convex combination of the approximate personalised PageRank and the CoreRank to diffuse multi-hop neighbourhood information in GNNs.
We demonstrate that CorePPR outperforms PPRGo on large graphs where selecting the most influential nodes is particularly relevant for scalability.
arXiv Detail & Related papers (2022-11-08T13:51:49Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Personalized PageRank Graph Attention Networks [0.0]
A graph neural network (GNN) is a framework to learn from graph-structured data.
GNNs typically use only information from a very limited neighborhood of each node to avoid over-smoothing.
In this work, we incorporate the limit distribution of Personalized PageRank (PPR) into graph attention networks (GATs) to capture information from larger neighborhoods without introducing over-smoothing (a sketch of this limit distribution appears after this list).
arXiv Detail & Related papers (2022-05-27T22:36:47Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited depth of existing GNNs, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- SMGRL: Scalable Multi-resolution Graph Representation Learning [1.878741798127168]
Graph convolutional networks (GCNs) allow us to learn topologically-aware node embeddings.
They are unable to capture long-range dependencies between nodes without adding additional layers.
We propose a Scalable Multi-resolution Graph Representation Learning framework that enables us to learn multi-resolution node embeddings efficiently.
arXiv Detail & Related papers (2022-01-29T21:48:48Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partition, are hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
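For the Personalized PageRank Graph Attention Networks entry above, here is a brief, hedged sketch of the PPR limit distribution it refers to. The teleport probability `alpha` and the function name `ppr_limit_distribution` are illustrative assumptions; how the papers combine this diffusion with attention weights or CoreRank scores is described in their respective abstracts.

```python
# Minimal sketch of the personalized-PageRank limit distribution; the dense
# closed form below is for illustration only.
import numpy as np


def ppr_limit_distribution(A_hat, alpha=0.15):
    """Return the dense PPR diffusion matrix alpha * (I - (1 - alpha) * A_hat)^(-1).

    Row i weights every node by its PPR relevance to node i, so multiplying
    node features by this matrix aggregates multi-hop neighborhood information
    while the teleport term alpha keeps node i anchored to itself, avoiding
    over-smoothing.
    """
    n = A_hat.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A_hat)


# Toy usage: diffuse node features X over a normalized 3-node cycle.
A_hat = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])
X = np.random.default_rng(0).normal(size=(3, 2))
X_diffused = ppr_limit_distribution(A_hat, alpha=0.15) @ X
```

On large graphs, this dense matrix inverse is typically replaced by a sparse approximation of personalized PageRank, which is the regime the CorePPR entry above targets.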
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.