Caterpillar GNN: Replacing Message Passing with Efficient Aggregation
- URL: http://arxiv.org/abs/2506.06784v1
- Date: Sat, 07 Jun 2025 12:52:27 GMT
- Title: Caterpillar GNN: Replacing Message Passing with Efficient Aggregation
- Authors: Marek Černý
- Abstract summary: We introduce an \emph{efficient aggregation} mechanism, trading off some expressivity for stronger and more structured aggregation capabilities. Our approach allows seamless scaling between classical message passing and simpler methods based on colored or plain walks. We demonstrate that the Caterpillar GNN achieves comparable predictive performance while significantly reducing the number of nodes in the hidden layers of the computational graph.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Message-passing graph neural networks (MPGNNs) dominate modern graph learning, typically prioritizing maximal expressive power. In contrast, we introduce an \emph{efficient aggregation} mechanism, deliberately trading off some expressivity for stronger and more structured aggregation capabilities. Our approach allows seamless scaling between classical message passing and simpler methods based on colored or plain walks. We rigorously characterize the expressive power at each intermediate step using homomorphism counts from a hierarchy of generalized \emph{caterpillar graphs}. Based on this foundation, we propose the \emph{Caterpillar GNN}, whose robust graph-level aggregation enables it to successfully tackle a synthetic graph-level task specifically designed to be challenging for classical MPGNNs. Moreover, we demonstrate that, on real-world datasets, the Caterpillar GNN achieves comparable predictive performance while significantly reducing the number of nodes in the hidden layers of the computational graph.
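To make the trade-off concrete, below is a minimal NumPy sketch of shrinking the hidden layer of the computational graph: a pooling matrix `S` with k rows collapses the n per-node states onto k aggregated states, where k = n recovers classical message passing and k = 1 degenerates to plain walk counting. The function names and the specific pooling construction are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def message_passing_layer(A, H, W):
    # Classical MPGNN layer: every node aggregates its neighbors,
    # so the hidden layer keeps one state per node (n states).
    return np.tanh(A @ H @ W)

def aggregated_layer(A, H, W, S):
    # Sketch of efficient aggregation: the pooling matrix S (k x n)
    # collapses the n messages onto k << n aggregated hidden states.
    # S = I recovers message passing; a single all-ones row reduces
    # the layer to counting (weighted) walks at the graph level.
    return np.tanh(S @ A @ H @ W)

# Toy usage: a 4-cycle, 2 aggregated hidden states instead of 4.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.eye(4)                                  # one-hot node features
W = np.random.default_rng(0).normal(size=(4, 3))
S = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)      # collapse to 2 states
print(aggregated_layer(A, H, W, S).shape)      # (2, 3)
```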
Related papers
- Lorentzian Graph Isomorphic Network [0.0]
Lorentzian Graph Isomorphic Network (LGIN) is a novel hyperbolic GNN (HGNN) designed for enhanced discrimination within the Lorentzian model. LGIN is the first to adapt the principles of powerful, highly discriminative GNN architectures to a Riemannian manifold.
arXiv Detail & Related papers (2025-03-31T18:49:34Z)
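The summary does not spell out LGIN's layer, but the Lorentzian model it names rests on a few standard operations. A minimal sketch of those primitives (the Minkowski inner product and the hyperboloid geodesic distance), for orientation only:

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product: the time-like first coordinate
    # enters with a negative sign.
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lorentz_distance(x, y):
    # Geodesic distance between points on the hyperboloid
    # {z : <z, z>_L = -1, z_0 > 0}.
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

# Points on the 2D hyperboloid embedded in 3D Minkowski space.
p = np.array([1.0, 0.0, 0.0])                   # the "origin"
q = np.array([np.cosh(1.0), np.sinh(1.0), 0.0])
print(lorentz_distance(p, q))                   # ~1.0
```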
- Beyond Message Passing: Neural Graph Pattern Machine [50.78679002846741]
We introduce the Neural Graph Pattern Machine (GPM), a novel framework that bypasses message passing by learning directly from graph substructures. GPM efficiently extracts, encodes, and prioritizes task-relevant graph patterns, offering greater expressivity and improved ability to capture long-range dependencies.
arXiv Detail & Related papers (2025-01-30T20:37:47Z)
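The summary does not say how GPM extracts substructures; one common, cheap way to harvest pattern candidates is random-walk sampling, sketched here under that assumption (the function name is hypothetical):

```python
import random

def sample_walk_pattern(adj, start, length, seed=None):
    # Sample one substructure "pattern" as a random walk over an
    # adjacency-list graph; repeated sampling yields a bag of
    # patterns that a downstream encoder can prioritize.
    rng = random.Random(seed)
    walk = [start]
    for _ in range(length):
        nbrs = adj[walk[-1]]
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

# Toy usage on a triangle with a pendant node.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(sample_walk_pattern(adj, start=0, length=4, seed=7))
```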
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs). This framework provides a standardized setting to evaluate GNNs across diverse datasets. We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Scalable Message Passing Neural Networks: No Need for Attention in Large Graph Representation Learning [15.317501970096743]
We show that by integrating standard convolutional message passing into a Pre-Layer Normalization Transformer-style block instead of attention, we can produce high-performing, deep message-passing-based Graph Neural Networks (GNNs). The results are competitive with the state of the art in large-graph transductive learning, without requiring the otherwise computationally and memory-expensive attention mechanism.
arXiv Detail & Related papers (2024-10-29T17:18:43Z)
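A minimal PyTorch sketch of the stated recipe: a Pre-LayerNorm Transformer-style block whose mixing step is convolutional message passing, with a normalized adjacency in place of the attention matrix. The layer sizes and exact normalization are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class PreLNGraphBlock(nn.Module):
    """Pre-LayerNorm Transformer-style block with convolutional message
    passing in place of self-attention (a sketch of the idea, not the
    paper's exact architecture)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.lin_mp = nn.Linear(dim, dim)   # message-passing weights
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x, adj):
        # adj: (n, n) normalized adjacency; replaces the attention matrix.
        x = x + adj @ self.lin_mp(self.norm1(x))
        x = x + self.ffn(self.norm2(x))
        return x

# Toy usage: 5 nodes, 8-dim features, row-normalized random adjacency.
n, d = 5, 8
adj = torch.rand(n, n)
adj = adj / adj.sum(dim=1, keepdim=True)
x = torch.randn(n, d)
block = PreLNGraphBlock(d, hidden=16)
print(block(x, adj).shape)  # torch.Size([5, 8])
```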
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs. Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors. We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
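To see why random projection helps here, note that the pre-computed neighbor aggregates can be very wide; a Johnson-Lindenstrauss-style Gaussian projection compresses them while roughly preserving distances. A sketch of that generic operator (RpHGNN's exact projection scheme may differ):

```python
import numpy as np

def random_project(features, out_dim, seed=0):
    # Gaussian random projection: compress d-dimensional pre-computed
    # aggregates to out_dim while approximately preserving geometry.
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(d, out_dim))
    return features @ R

# Toy usage: 1000 nodes, 512-dim aggregates squeezed to 64 dims.
X = np.random.default_rng(1).normal(size=(1000, 512))
print(random_project(X, 64).shape)  # (1000, 64)
```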
- Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication [100.51884192970499]
GNNs are a powerful family of neural networks for learning over graphs. However, scaling GNNs by either deepening or widening suffers from prevalent issues of unhealthy gradients, over-smoothing, and information squashing. We propose not to deepen or widen current GNNs, but instead present a data-centric perspective of model soups tailored for GNNs.
arXiv Detail & Related papers (2023-06-18T03:33:46Z)
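The model-soup idea is simple enough to sketch: train several identically shaped GNNs independently, with no intermediate communication, then average their weights. A minimal PyTorch version, assuming matching state dicts:

```python
import torch

def soup(state_dicts):
    # Uniform model soup: element-wise average of the parameters of
    # independently trained models with identical architectures.
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
            for k in state_dicts[0]}

# Toy usage with two "models" of one linear layer each.
m1 = torch.nn.Linear(4, 2).state_dict()
m2 = torch.nn.Linear(4, 2).state_dict()
merged = soup([m1, m2])
print(merged["weight"].shape)  # torch.Size([2, 4])
```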
- Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs). Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message-passing information across the nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z)
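A PyTorch sketch of the gating idea: a learned per-node, per-channel rate in (0, 1) blends each node's previous state with its message-passing update, so different nodes can effectively learn at different rates. G$^2$'s actual gate is built from graph gradients; this simplified gate is an assumption for illustration.

```python
import torch
import torch.nn as nn

class GatedGNNLayer(nn.Module):
    # Sketch of gating a message-passing update with a learned
    # multi-rate gate (the spirit of G^2, not its exact mechanism).
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x, adj):
        update = torch.tanh(adj @ self.msg(x))
        tau = torch.sigmoid(self.gate(x))   # per-node rates in (0, 1)
        return (1 - tau) * x + tau * update

# Toy usage: identity adjacency as a stand-in for a normalized one.
layer = GatedGNNLayer(dim=8)
x = torch.randn(6, 8)
adj = torch.eye(6)
print(layer(x, adj).shape)  # torch.Size([6, 8])
```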
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data. We present a novel graph-matching-based GNN pre-training framework, called GMPT. The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Hierarchical graph neural nets can capture long-range interactions [8.067880298298185]
We study hierarchical message-passing models that leverage a multi-resolution representation of a given graph. This facilitates learning of features that span large receptive fields without loss of local information. We introduce Hierarchical Graph Net (HGNet), which for any two connected nodes guarantees the existence of message-passing paths of at most logarithmic length.
arXiv Detail & Related papers (2021-07-15T16:24:22Z)
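A two-level NumPy sketch of hierarchical message passing: messages are exchanged on the original graph, pooled to a cluster-level graph, exchanged there, and broadcast back, so distant nodes communicate through short coarse-level paths. The clustering and weights are illustrative assumptions, not HGNet's construction.

```python
import numpy as np

def hierarchical_mp(A, H, assign, W_fine, W_coarse):
    # One hierarchical round: fine-level message passing, pooling to a
    # coarse graph via the cluster assignment, coarse-level message
    # passing, then broadcasting coarse features back to the nodes.
    k = assign.max() + 1
    S = np.eye(k)[assign]                    # (n, k) assignment matrix
    A_coarse = S.T @ A @ S                   # coarse adjacency
    H_fine = np.tanh(A @ H @ W_fine)         # local messages
    H_coarse = np.tanh(A_coarse @ (S.T @ H_fine) @ W_coarse)
    return H_fine + S @ H_coarse             # long-range shortcut

# Toy usage: a 6-node path split into two 3-node clusters.
A = np.diag(np.ones(5), 1)
A = A + A.T
assign = np.array([0, 0, 0, 1, 1, 1])
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
out = hierarchical_mp(A, H, assign, rng.normal(size=(4, 4)),
                      rng.normal(size=(4, 4)))
print(out.shape)  # (6, 4)
```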
- GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs. GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations. GAS reaches state-of-the-art performance on large-scale graphs.
arXiv Detail & Related papers (2021-06-10T09:26:56Z)
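The historical-embedding trick is easy to sketch: activations computed in earlier iterations are cached, and when a mini-batch would otherwise recurse into out-of-batch neighbors, their cached states are read instead, pruning those sub-trees. A minimal cache in PyTorch (the interface names are illustrative, not the GAS library's API):

```python
import torch

class HistoricalEmbeddings:
    # Sketch of GAS-style historical embeddings: out-of-batch neighbors
    # are read from a cache of activations saved in earlier training
    # iterations instead of being recomputed.
    def __init__(self, num_nodes, dim):
        self.cache = torch.zeros(num_nodes, dim)

    def push(self, node_ids, h):
        self.cache[node_ids] = h.detach()   # store without gradients

    def pull(self, node_ids):
        return self.cache[node_ids]

# Toy usage: cache a batch, then read one node's historical state.
hist = HistoricalEmbeddings(num_nodes=100, dim=16)
batch = torch.tensor([3, 7, 42])
hist.push(batch, torch.randn(3, 16))        # after computing a batch
print(hist.pull(torch.tensor([7])).shape)   # torch.Size([1, 16])
```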
- Topological Graph Neural Networks [14.349152231293928]
We present TOGL, a novel layer that incorporates global topological information of a graph using persistent homology. Augmenting GNNs with our layer leads to beneficial predictive performance, both on synthetic data sets and on real-world data.
arXiv Detail & Related papers (2021-02-15T20:27:56Z)
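For a sense of the topological information involved, here is a plain, non-differentiable computation of the 0-dimensional persistence pairs of a node filtration via union-find with the elder rule; TOGL itself uses a differentiable persistent-homology layer, so this is for orientation only.

```python
def zero_dim_persistence(values, edges):
    # Finite (birth, death) pairs of connected components in the
    # sublevel-set filtration of per-node values; an edge appears once
    # both endpoints have. The essential component (infinite bar) is
    # not returned, and zero-length pairs are dropped.
    parent = list(range(len(values)))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    pairs = []
    for u, v in sorted(edges, key=lambda e: max(values[e[0]], values[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if values[ru] > values[rv]:
            ru, rv = rv, ru                 # elder rule: younger root dies
        birth, death = values[rv], max(values[u], values[v])
        if death > birth:
            pairs.append((birth, death))
        parent[rv] = ru
    return pairs

# Toy usage: the middle node (value 3.0) bridges two components, so the
# component born at value 1.0 dies at 3.0.
print(zero_dim_persistence([0.0, 3.0, 1.0], [(0, 1), (1, 2)]))
# [(1.0, 3.0)]
```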