Message Passing on the Edge: Towards Scalable and Expressive GNNs
- URL: http://arxiv.org/abs/2510.13615v1
- Date: Wed, 15 Oct 2025 14:45:17 GMT
- Title: Message Passing on the Edge: Towards Scalable and Expressive GNNs
- Authors: Pablo Barceló, Fabian Jogl, Alexander Kozachinskiy, Matthias Lanzinger, Stefan Neumann, Cristóbal Rojas,
- Abstract summary: We propose EB-1WL, an edge-based color-refinement test, and a corresponding GNN architecture, EB-GNN. Our architecture is inspired by a classic triangle counting algorithm by Chiba and Nishizeki, and explicitly uses triangles during message passing.
- Score: 53.11031839861869
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose EB-1WL, an edge-based color-refinement test, and a corresponding GNN architecture, EB-GNN. Our architecture is inspired by a classic triangle counting algorithm by Chiba and Nishizeki, and explicitly uses triangles during message passing. We achieve the following results: (1) EB-1WL is significantly more expressive than 1-WL. Further, we provide a complete logical characterization of EB-1WL based on first-order logic, and matching distinguishability results based on homomorphism counting. (2) In an important distinction from previous proposals for more expressive GNN architectures, EB-1WL and EB-GNN require near-linear time and memory on practical graph learning tasks. (3) Empirically, we show that EB-GNN is a highly efficient general-purpose architecture: it substantially outperforms simple MPNNs, and remains competitive with task-specialized GNNs while being significantly more computationally efficient.
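To make the idea concrete, here is a toy sketch of edge-based color refinement with explicit triangle messages (our own illustration; `eb_style_refinement` and its exact update rule are ours, not the paper's EB-1WL definition):

```python
def eb_style_refinement(adj, rounds=2):
    """Toy edge-based color refinement with explicit triangle messages.

    adj maps each node to its set of neighbors (undirected graph).
    Each round, an edge (u, v) refines its color from: its own color,
    the colors of edge pairs closing a triangle over (u, v), and the
    colors of the remaining edges incident to u and to v.
    """
    def e(a, b):
        return tuple(sorted((a, b)))

    edges = {e(u, v) for u in adj for v in adj[u]}
    color = {ed: 0 for ed in edges}
    for _ in range(rounds):
        new = {}
        for (u, v) in edges:
            tri = tuple(sorted(
                tuple(sorted((color[e(u, w)], color[e(v, w)])))
                for w in adj[u] & adj[v]))            # triangles over (u, v)
            side = tuple(sorted((
                tuple(sorted(color[e(u, w)] for w in adj[u] - {v})),
                tuple(sorted(color[e(v, w)] for w in adj[v] - {u})))))
            new[(u, v)] = (color[(u, v)], tri, side)
        color = new  # a real test would hash-compress colors each round
    return color

# C6 has no triangles; two disjoint C3's put every edge in a triangle.
# 1-WL cannot separate these graphs, but the triangle messages can:
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
two_c3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(sorted(eb_style_refinement(c6).values()) ==
      sorted(eb_style_refinement(two_c3).values()))   # False
```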
Related papers
- Branching Strategies Based on Subgraph GNNs: A Study on Theoretical Promise versus Practical Reality [16.01902631233758]
Graph Neural Networks (GNNs) have emerged as a promising approach for "learning to branch" in Mixed-Integer Linear Programming (MILP). In this work, we investigate Subgraph GNNs as a theoretical middle ground. Our results indicate that for MILP branching, the computational cost of expressive GNNs currently outweighs their gains in decision quality.
arXiv Detail & Related papers (2025-12-10T06:29:25Z)
- Demystifying MPNNs: Message Passing as Merely Efficient Matrix Multiplication [4.002604752467421]
Graph Neural Networks (GNNs) have achieved remarkable success, yet their design largely relies on empirical intuition rather than theoretical understanding. We present a comprehensive analysis of GNN behavior through three fundamental aspects. We demonstrate that gradient-related issues, rather than just over-smoothing, can significantly impact performance in sparse graphs. We also analyze how different normalization schemes affect model performance and how GNNs make predictions with uniform node features.
arXiv Detail & Related papers (2025-01-31T19:48:03Z)
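As the title of the paper above suggests, one round of neighborhood aggregation is just a (sparse) matrix product. A minimal numpy sketch of a GCN-style round (our own illustration, not code from the paper):

```python
import numpy as np

def mpnn_round(A, X, W):
    # Sum aggregation over neighbors is the product A @ X; GCN-style
    # normalization uses D^{-1/2} (A + I) D^{-1/2} in place of A.
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)  # ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # star on 3 nodes
X = rng.normal(size=(3, 4))                     # node features
W = rng.normal(size=(4, 8))                     # weight matrix
print(mpnn_round(A, X, W).shape)                # (3, 8)
```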
- Rethinking the Expressive Power of GNNs via Graph Biconnectivity [45.4674360883544]
We introduce a novel class of expressivity metrics via graph biconnectivity and highlight their importance in both theory and practice. We introduce a principled and more efficient approach, called the Generalized Distance Weisfeiler-Lehman (GD-WL) test. Practically, we show that GD-WL can be implemented by a Transformer-like architecture that preserves expressiveness and enjoys full parallelizability.
arXiv Detail & Related papers (2023-01-23T15:58:59Z)
- Empowering GNNs via Edge-Aware Weisfeiler-Leman Algorithm [79.68451803954751]
Message passing graph neural networks (GNNs) are known to have their expressiveness upper-bounded by the 1-dimensional Weisfeiler-Leman (1-WL) algorithm.
We propose a general and provably powerful GNN framework that preserves the scalability of the message passing scheme.
arXiv Detail & Related papers (2022-06-04T21:37:59Z)
- Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm. We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
arXiv Detail & Related papers (2022-05-19T18:40:25Z)
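A concrete instance of the closed-path claim above (our own sketch, not the paper's construction): the diagonal of A^k counts the closed walks of length k at each node, a permutation-equivariant structural feature.

```python
import numpy as np

def closed_walk_counts(A, k_max=4):
    # diag(A^k) gives the number of closed walks of length k that start
    # and end at each node (our illustration, not the paper's code).
    n = A.shape[0]
    feats, P = [], np.eye(n)
    for _ in range(k_max):
        P = P @ A                      # P now holds A^k
        feats.append(np.diag(P))       # closed walks of length k per node
    return np.stack(feats, axis=1)     # shape (n, k_max)

# Sanity check: diag(A^3)/2 counts triangles through each node.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # K3
print(closed_walk_counts(A)[:, 2] / 2)  # [1. 1. 1.]
```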
- From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness [23.279464786779787]
We introduce a general framework to uplift any MPNN to be more expressive. Our framework is strictly more powerful than 1-WL and 2-WL, and no less powerful than 3-WL.
Our method sets new state-of-the-art performance by large margins for several well-known graph ML tasks.
arXiv Detail & Related papers (2021-10-07T19:08:08Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization. We study the training dynamics of GNNs, focusing on how skip connections and greater depth implicitly accelerate optimization. Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- Identity-aware Graph Neural Networks [63.6952975763946]
We develop Identity-aware Graph Neural Networks (ID-GNNs), a class of message passing GNNs with greater expressive power than the 1-WL test. ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing. We show that transforming existing GNNs into ID-GNNs yields on average a 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
arXiv Detail & Related papers (2021-01-25T18:59:01Z)
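A heavily simplified rendition of the identity-awareness idea above (our own toy, not the paper's exact ID-GNN): run per-root aggregation with an extra "am I the root?" feature.

```python
import numpy as np

def id_aware_embeddings(A, X, k=3):
    # For each root, run k rounds of sum aggregation with an identity
    # flag appended, so the root can be told apart from structurally
    # similar non-roots (our simplification of the ID-GNN idea).
    n = A.shape[0]
    out = []
    for root in range(n):
        flag = np.zeros((n, 1))
        flag[root] = 1.0                        # identity coloring
        H = np.hstack([X, flag])
        for _ in range(k):
            H = (A + np.eye(n)) @ H             # sum aggregation + self-loops
        out.append(H[root])
    return np.stack(out)

# With uniform features, 1-WL colors every node of C3 and C4 alike
# (both are 2-regular); the identity channel separates them:
c3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
c4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
               [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
print(id_aware_embeddings(c3, np.ones((3, 1)))[0])  # [27. 9.]
print(id_aware_embeddings(c4, np.ones((4, 1)))[0])  # [27. 7.]
```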
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, GNNs generate identical representations for graph substructures that may in fact be very different. More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure. We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
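A minimal sketch of the distance-feature idea above (our illustration of the general flavor; the paper's DE is more general): encode each node by its shortest-path distance to a target node set and feed the result to any GNN as extra input features.

```python
from collections import deque

def distance_encoding(adj, S, unreachable=-1):
    # Multi-source BFS: distance from each node to the target set S
    # (our illustrative sketch of distance-based features).
    dist = {v: unreachable for v in adj}
    queue = deque((s, 0) for s in S)
    for s in S:
        dist[s] = 0
    while queue:
        v, d = queue.popleft()
        for w in adj[v]:
            if dist[w] == unreachable:
                dist[w] = d + 1
                queue.append((w, d + 1))
    return dist

# 4-cycle with target set {0}: distances 0,1,2,1 break the symmetry
# that makes all nodes look alike to a vanilla message-passing GNN.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(distance_encoding(c4, {0}))  # {0: 0, 1: 1, 2: 2, 3: 1}
```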
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs. Most existing GNN models in practice are shallow and essentially feature-centric. We show empirically and analytically that existing shallow GNNs cannot preserve graph structures well. We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
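A minimal sketch of the plug-in idea above (our illustration; details differ from the paper): concatenate top eigenvectors of the adjacency matrix with the node features before the first GNN layer, so the model sees structure-preserving coordinates in addition to raw features.

```python
import numpy as np

def eigen_plugin_features(A, X, d=2):
    # Take the d largest-magnitude eigenpairs of the (symmetric)
    # adjacency matrix and append the eigenvectors to the node features.
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return np.hstack([X, vecs[:, idx]])

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.ones((3, 2))
print(eigen_plugin_features(A, X).shape)  # (3, 4)
```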