GCN-MPPR: Enhancing the Propagation of Message Passing Neural Networks via Motif-Based Personalized PageRank
- URL: http://arxiv.org/abs/2602.07903v1
- Date: Sun, 08 Feb 2026 10:49:49 GMT
- Title: GCN-MPPR: Enhancing the Propagation of Message Passing Neural Networks via Motif-Based Personalized PageRank
- Authors: Mingcan Wang, Junchang Xin, Zhongming Yao, Kaifu Long, Zhiqiong Wang,
- Abstract summary: This paper presents a novel variant of PageRank named motif-based personalized PageRank (MPPR). MPPR is proposed to measure the influence of one node on another by taking higher-order motif relationships into account. The experimental results show that the proposed method outperforms almost all of the baselines in accuracy, stability, and time consumption.
- Score: 3.3894571022475066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithms based on message passing neural networks (MPNNs) have recently achieved great success across a variety of graph applications. However, studies find that these methods propagate information only within shallow, limited neighborhoods, largely due to over-smoothing; in other words, most existing MPNNs cannot be made truly `deep'. Although previous work has tried to address this challenge through optimization- or structure-level remedies, the overall performance of GCNs still suffers from limited accuracy, poor stability, and unaffordable computational cost. Moreover, neglecting higher-order relationships during propagation further limits their performance. To overcome these challenges, a novel variant of PageRank named motif-based personalized PageRank (MPPR) is proposed to measure the influence of one node on another while taking higher-order motif relationships into account. MPPR is then applied to the message passing process of GCNs, guiding propagation at a relatively `high' level. The experimental results show that the proposed method outperforms almost all of the baselines in accuracy, stability, and time consumption. Additionally, the proposed method can serve as a component underpinning almost all GCN tasks, as demonstrated with DGCRL in the experiments. The anonymous code repository is available at: https://anonymous.4open.science/r/GCN-MPPR-AFD6/.
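The abstract does not spell out how MPPR is computed or how it is injected into message passing, so the sketch below is only a rough Python illustration under explicit assumptions: edges are reweighted by the triangles (a simple motif) they participate in, a personalized PageRank matrix is approximated by power iteration on that motif-weighted graph, and the resulting matrix spreads node features in a PPNP-style step. The function names, the choice of triangles as the motif, and the propagation form are all assumptions made for illustration, not the authors' actual formulation.

```python
# Hypothetical sketch of motif-weighted personalized PageRank propagation.
# The triangle-based motif weighting and all names are assumptions; the paper's
# exact MPPR definition is not reproduced here.
import numpy as np

def motif_weighted_adjacency(A: np.ndarray) -> np.ndarray:
    """Weight each edge by the number of triangles it participates in (plus 1)."""
    # (A @ A)[i, j] counts common neighbours of i and j, i.e. triangles through edge (i, j).
    triangles = (A @ A) * A
    return A + triangles  # simple motif reweighting: edge weight grows with triangle count

def mppr_matrix(A: np.ndarray, alpha: float = 0.1, iters: int = 50) -> np.ndarray:
    """Approximate a personalized PageRank matrix on the motif-weighted graph."""
    W = motif_weighted_adjacency(A)
    # Row-normalize to obtain a transition matrix.
    P = W / np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)
    n = A.shape[0]
    Pi = np.eye(n)  # row i is the personalization (restart) vector of node i
    for _ in range(iters):
        Pi = alpha * np.eye(n) + (1 - alpha) * Pi @ P  # power iteration
    return Pi

def propagate(X: np.ndarray, A: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Use the MPPR matrix to spread node features, akin to PPNP-style propagation."""
    return mppr_matrix(A, alpha) @ X

if __name__ == "__main__":
    # Toy 4-node graph: a triangle (0-1-2) with a pendant node 3 attached to node 2.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = np.random.rand(4, 8)       # node features (e.g. the output of a GCN layer)
    print(propagate(X, A).shape)   # (4, 8)
```

In this sketch the learnable part of the network (the feature transformation) would run first, and the MPPR matrix only governs how the transformed features diffuse, which is the usual way PageRank-style propagation is decoupled from transformation.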
Related papers
- Statistical physics analysis of graph neural networks: Approaching optimality in the contextual stochastic block model [0.0]
Graph neural networks (GNNs) are designed to process data associated with graphs.
GNNs can encounter difficulties in gathering information from far-apart nodes through iterated aggregation steps.
We show how the architecture of the GCN has to scale with depth to avoid oversmoothing.
arXiv Detail & Related papers (2025-03-03T09:55:10Z)
- FIT-GNN: Faster Inference Time for GNNs that 'FIT' in Memory Using Coarsening [1.1345413192078595]
This paper presents a novel approach to improve the scalability of Graph Neural Networks (GNNs) by reducing computational burden during the inference phase using graph coarsening.
Our study extends the application of graph coarsening for graph-level tasks, including graph classification and graph regression.
Results show that the proposed method achieves orders of magnitude improvements in single-node inference time compared to traditional approaches.
arXiv Detail & Related papers (2024-10-19T06:27:24Z)
- Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs [77.42221150848535]
We propose a novel message passing function called Multiset-to-Multiset GNN (M2M-GNN).
Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the aforementioned limitations of SMP, yielding superior performance in comparison.
arXiv Detail & Related papers (2024-05-31T07:39:22Z)
- Forward Learning of Graph Neural Networks [17.79590285482424]
Backpropagation (BP) is the de facto standard for training deep neural networks (NNs).
BP imposes several constraints, which are not only biologically implausible, but also limit the scalability, parallelism, and flexibility in learning NNs.
We propose ForwardGNN, which avoids the constraints imposed by BP via an effective layer-wise local forward training.
arXiv Detail & Related papers (2024-03-16T19:40:35Z)
- Revisiting Heterophily For Graph Neural Networks [42.41238892727136]
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (homophily assumption).
Recent work has identified a non-trivial set of datasets where their performance is unsatisfactory compared to NNs.
arXiv Detail & Related papers (2022-10-14T08:00:26Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The scale operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- Boosting Graph Neural Networks by Injecting Pooling in Message Passing [4.952681349410351]
We propose a new, adaptable, and powerful MP framework to prevent over-smoothing.
Our bilateral-MP estimates a pairwise modular gradient by utilizing the class information of nodes.
Experiments on five medium-size benchmark datasets indicate that the bilateral-MP improves performance by alleviating over-smoothing.
arXiv Detail & Related papers (2022-02-08T08:21:20Z)
- Contrastive Adaptive Propagation Graph Neural Networks for Efficient Graph Learning [65.08818785032719]
Graph Neural Networks (GNNs) have achieved great success in processing graph data by extracting and propagating structure-aware features.
Recently, the field has advanced from local propagation schemes that focus on local neighbors towards extended propagation schemes that can directly handle extended neighborhoods consisting of both local and high-order neighbors.
Despite the impressive performance, existing approaches are still insufficient to build an efficient and learnable extended propagation scheme that can adaptively adjust the influence of local and high-order neighbors.
arXiv Detail & Related papers (2021-12-02T10:35:33Z)
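The summary above does not detail the paper's adaptive propagation scheme; purely as a generic, hypothetical illustration of balancing local and higher-order neighbors, the Python sketch below mixes 1-hop and 2-hop propagation through two mixing parameters that would normally be learned. This is not the cited method itself.

```python
# Generic illustration (not the cited paper's scheme) of letting a model weight
# local (1-hop) versus higher-order (2-hop) neighbours during propagation.
# All names here are illustrative.
import numpy as np

def sym_normalize(A: np.ndarray) -> np.ndarray:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def mixed_propagation(X, A, theta_local, theta_high):
    """Blend 1-hop and 2-hop propagation; theta_* would be learned parameters."""
    S = sym_normalize(A)
    weights = np.exp([theta_local, theta_high])
    weights = weights / weights.sum()                 # softmax over the two scales
    return weights[0] * (S @ X) + weights[1] * (S @ S @ X)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = np.random.rand(3, 4)
print(mixed_propagation(X, A, theta_local=0.0, theta_high=0.0).shape)  # (3, 4)
```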
- Tackling Over-Smoothing for General Graph Convolutional Networks [88.71154017107257]
We study how general GCNs behave as depth increases, including generic GCN, GCN with bias, ResGCN, and APPNP.
We propose DropEdge to alleviate over-smoothing by randomly removing a certain number of edges at each training epoch.
arXiv Detail & Related papers (2020-08-22T16:14:01Z)
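DropEdge, as summarized above, is simple enough to sketch: drop a random fraction of edges before each training epoch and train the GCN on the thinned graph. The minimal Python example below uses illustrative function and variable names, not the DropEdge authors' code.

```python
# Minimal sketch of DropEdge-style augmentation: randomly remove a fraction of
# edges before each training epoch. Names are illustrative.
import numpy as np

def drop_edge(edge_index: np.ndarray, drop_rate: float, rng: np.random.Generator) -> np.ndarray:
    """edge_index: (2, E) array of edges; returns a randomly thinned copy."""
    num_edges = edge_index.shape[1]
    keep = rng.random(num_edges) >= drop_rate       # keep each edge independently
    return edge_index[:, keep]

rng = np.random.default_rng(0)
edges = np.array([[0, 0, 1, 2, 3],
                  [1, 2, 2, 3, 4]])                 # 5 edges on 5 nodes
for epoch in range(3):
    sampled = drop_edge(edges, drop_rate=0.3, rng=rng)
    # ... build the (re-normalized) adjacency from `sampled` and run the GCN forward pass ...
    print(f"epoch {epoch}: kept {sampled.shape[1]} of {edges.shape[1]} edges")
```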
- DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs.
arXiv Detail & Related papers (2020-06-13T23:00:22Z)
- Understanding and Resolving Performance Degradation in Graph Convolutional Networks [105.14867349802898]
Graph Convolutional Network (GCN) stacks several layers and in each layer performs a PROPagation operation (PROP) and a TRANsformation operation (TRAN) for learning node representations over graph-structured data.
GCNs tend to suffer a performance drop as the model gets deeper.
We study performance degradation of GCNs by experimentally examining how stacking only TRANs or PROPs works.
arXiv Detail & Related papers (2020-06-12T12:12:12Z)