Accurate and Scalable Graph Neural Networks via Message Invariance
- URL: http://arxiv.org/abs/2502.19693v1
- Date: Thu, 27 Feb 2025 02:07:00 GMT
- Title: Accurate and Scalable Graph Neural Networks via Message Invariance
- Authors: Zhihao Shi, Jie Wang, Zhiwei Zhuang, Xize Liang, Bin Li, Feng Wu
- Abstract summary: We propose an accurate and fast mini-batch approach for large graph transductive learning. We show that TOP is significantly faster than existing mini-batch methods by an order of magnitude on vast graphs.
- Score: 36.334113380584334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Message passing-based graph neural networks (GNNs) have achieved great success in many real-world applications. For a sampled mini-batch of target nodes, the message passing process is divided into two parts: message passing between nodes within the batch (MP-IB) and message passing from nodes outside the batch to those within it (MP-OB). However, MP-OB recursively relies on higher-order out-of-batch neighbors, leading to a computational cost that grows exponentially with the number of layers. Due to this neighbor explosion, the whole message passing must store most nodes and edges on the GPU, making many GNNs infeasible on large-scale graphs. To address this challenge, we propose an accurate and fast mini-batch approach for large graph transductive learning, namely topological compensation (TOP), which obtains the outputs of the whole message passing solely through MP-IB, without the costly MP-OB. The major pillar of TOP is a novel concept of message invariance, which defines message-invariant transformations to convert costly MP-OB into fast MP-IB. This ensures that the modified MP-IB has the same output as the whole message passing. Experiments demonstrate that TOP is significantly faster than existing mini-batch methods by an order of magnitude on vast graphs (millions of nodes and billions of edges) with limited accuracy degradation.
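To make the MP-IB/MP-OB decomposition concrete, below is a minimal toy sketch (not the authors' code; the mean-aggregation layer, node ids, and variable names are illustrative assumptions). It checks that one full message-passing layer's output for each in-batch node equals the sum of its in-batch (MP-IB) and out-of-batch (MP-OB) contributions; the MP-OB term is the part whose recursive expansion causes the neighbor explosion and which TOP aims to replace with in-batch computation via message invariance.

```python
# Toy illustration of splitting one mean-aggregation GNN layer into
# MP-IB (messages from in-batch neighbors) and MP-OB (messages from
# out-of-batch neighbors). All data here is synthetic.
import numpy as np

num_nodes, dim = 6, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(num_nodes, dim))            # node features
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0, 5], 3: [0], 4: [1], 5: [2]}
batch = {0, 1, 2}                                # sampled target nodes

def layer_output(v):
    """Full message passing: average over all neighbors of v."""
    return np.mean([x[u] for u in adj[v]], axis=0)

def split_output(v):
    """Same layer, split into MP-IB and MP-OB contributions."""
    in_msgs = [x[u] for u in adj[v] if u in batch]
    out_msgs = [x[u] for u in adj[v] if u not in batch]
    n = len(adj[v])
    mp_ib = np.sum(in_msgs, axis=0) / n if in_msgs else np.zeros(dim)
    mp_ob = np.sum(out_msgs, axis=0) / n if out_msgs else np.zeros(dim)
    return mp_ib, mp_ob

for v in batch:
    ib, ob = split_output(v)
    # The whole message passing equals MP-IB + MP-OB for every in-batch node.
    assert np.allclose(layer_output(v), ib + ob)
```

With multiple layers, computing the MP-OB term exactly requires expanding out-of-batch neighbors recursively, which is the source of the exponential cost the abstract describes.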
Related papers
- Multigraph Message Passing with Bi-Directional Multi-Edge Aggregations [5.193718340934995]
MEGA-GNN is a unified framework for message passing on multigraphs. We show that MEGA-GNN is not only permutation equivariant but also universal given a strict total ordering on the edges. Experiments show that MEGA-GNN significantly outperforms state-of-the-art solutions by up to 13% on Anti-Money Laundering datasets.
arXiv Detail & Related papers (2024-11-29T20:15:18Z)
- Partitioning Message Passing for Graph Fraud Detection [57.928658584067556]
Label imbalance and homophily-heterophily mixture are the fundamental problems encountered when applying Graph Neural Networks (GNNs) to Graph Fraud Detection (GFD) tasks.
Existing GNN-based GFD models are designed to augment graph structure to accommodate the inductive bias of GNNs towards homophily.
In our work, we argue that the key to applying GNNs for GFD is not to exclude but to distinguish neighbors with different labels.
arXiv Detail & Related papers (2024-11-16T11:30:53Z)
- Link Prediction with Untrained Message Passing Layers [0.716879432974126]
We study the use of various untrained message passing layers in graph neural networks.
We find that untrained message passing layers can lead to competitive and even superior performance compared to fully trained MPNNs.
arXiv Detail & Related papers (2024-06-24T14:46:34Z)
- Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs [77.42221150848535]
We propose a novel message passing function called Multiset-to-Multiset GNN (M2M-GNN).
Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the aforementioned limitations of SMP, yielding superior performance in comparison.
arXiv Detail & Related papers (2024-05-31T07:39:22Z)
- MADNet: Maximizing Addressee Deduction Expectation for Multi-Party Conversation Generation [64.54727792762816]
We study the scarcity of addressee labels, which is a common issue in multi-party conversations (MPCs).
We propose MADNet that maximizes addressee deduction expectation in heterogeneous graph neural networks for MPC generation.
Experimental results on two Ubuntu IRC channel benchmarks show that MADNet outperforms various baseline models on the task of MPC generation.
arXiv Detail & Related papers (2023-05-22T05:50:11Z)
- Provably Convergent Subgraph-wise Sampling for Fast GNN Training [122.68566970275683]
We propose a novel subgraph-wise sampling method with a convergence guarantee, namely Local Message Compensation (LMC).
LMC retrieves the messages discarded by sampling, based on a message passing formulation of the backward pass.
Experiments on large-scale benchmarks demonstrate that LMC is significantly faster than state-of-the-art subgraph-wise sampling methods.
arXiv Detail & Related papers (2023-03-17T05:16:49Z)
- MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields [4.812321790984494]
We introduce MACE, a new equivariant MPNN model that uses higher body order messages.
We show that using four-body messages reduces the required number of message passing iterations to just two, resulting in a fast and highly parallelizable model.
arXiv Detail & Related papers (2022-06-15T17:46:05Z)
- Boosting Graph Neural Networks by Injecting Pooling in Message Passing [4.952681349410351]
We propose a new, adaptable, and powerful MP framework to prevent over-smoothing.
Our bilateral-MP estimates a pairwise modular gradient by utilizing the class information of nodes.
Experiments on five medium-size benchmark datasets indicate that the bilateral-MP improves performance by alleviating over-smoothing.
arXiv Detail & Related papers (2022-02-08T08:21:20Z)
- Memory-based Message Passing: Decoupling the Message for Propagation from Discrimination [6.7605701314795095]
Message passing is a fundamental procedure for graph neural networks (GNNs).
We propose a Memory-based Message Passing (MMP) method to decouple the message of each node into a self-embedding part for discrimination and a memory part for propagation.
Our MMP is a general technique that can work as an additional layer to help improve the performance of traditional GNNs.
arXiv Detail & Related papers (2022-02-01T14:15:32Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- GMLP: Building Scalable and Flexible Graph Neural Networks with Feature-Message Passing [16.683813354137254]
Graph Multi-layer Perceptron (GMLP) separates the neural update from the message passing.
We conduct extensive evaluations on 11 benchmark datasets.
arXiv Detail & Related papers (2021-04-20T10:19:21Z)