Connectivity-Guided Sparsification of 2-FWL GNNs: Preserving Full Expressivity with Improved Efficiency
- URL: http://arxiv.org/abs/2511.12838v1
- Date: Sun, 16 Nov 2025 23:46:54 GMT
- Title: Connectivity-Guided Sparsification of 2-FWL GNNs: Preserving Full Expressivity with Improved Efficiency
- Authors: Rongqin Chen, Fan Mo, Pak Lon Ip, Shenghui Zhang, Dan Wu, Ye Li, Leong Hou U
- Abstract summary: We propose Co-Sparsify, a connectivity-aware sparsification framework. Our key insight is that 3-node interactions are expressively necessary only within biconnected components. We prove that Co-Sparsify is as expressive as the 2-FWL test.
- Score: 15.330129666665927
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Higher-order Graph Neural Networks (HOGNNs) based on the 2-FWL test achieve superior expressivity by modeling 2- and 3-node interactions, but at $\mathcal{O}(n^3)$ computational cost. Existing efficiency methods mitigate this burden only at the cost of reduced expressivity. We propose Co-Sparsify, a connectivity-aware sparsification framework that eliminates provably redundant computations while preserving full 2-FWL expressive power. Our key insight is that 3-node interactions are expressively necessary only within biconnected components -- maximal subgraphs in which every pair of nodes lies on a common cycle. Outside these components, structural relationships can be fully captured via 2-node message passing or global readout, rendering higher-order modeling unnecessary. Co-Sparsify restricts 2-node message passing to connected components and 3-node interactions to biconnected ones, removing computation without approximation or sampling. We prove that Co-Sparsified GNNs are as expressive as the 2-FWL test. Empirically, on PPGN, Co-Sparsify matches or exceeds accuracy on synthetic substructure counting tasks and achieves state-of-the-art performance on real-world benchmarks (ZINC, QM9). This study demonstrates that high expressivity and scalability are not mutually exclusive: principled, topology-guided sparsification enables powerful, efficient GNNs with theoretical guarantees.
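The biconnected-component restriction in the abstract can be illustrated with a small brute-force sketch (illustrative Python only, not the authors' code; the graph and all helper names are made up). Two nodes lie on a common cycle iff they stay connected after deleting the edge between them (if any) and after deleting any single other vertex, and a 3-node interaction is kept only when all three nodes pass this test pairwise:

```python
from collections import deque
from itertools import combinations

def reachable(adj, u, v, removed=None, cut_edge=None):
    """BFS: is v reachable from u with one vertex and/or one edge deleted?"""
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            if y == removed or y in seen:
                continue
            if cut_edge is not None and {x, y} == set(cut_edge):
                continue
            seen.add(y)
            queue.append(y)
    return False

def on_common_cycle(adj, u, v):
    """True iff u and v lie on a common cycle: they stay connected after
    deleting the edge uv (if present) and after deleting any other vertex."""
    if not reachable(adj, u, v, cut_edge=(u, v)):
        return False
    return all(reachable(adj, u, v, removed=w) for w in adj if w not in (u, v))

def kept_triples(adj):
    """3-node sets whose members pairwise share a cycle, i.e. lie in one
    biconnected component -- the only triples the sparsification keeps."""
    return sum(
        1
        for u, v, w in combinations(adj, 3)
        if on_common_cycle(adj, u, v)
        and on_common_cycle(adj, v, w)
        and on_common_cycle(adj, u, w)
    )

# Lollipop graph: a 4-clique (nodes 0-3) with a path 3-4-5-6 hanging off it.
adj = {
    0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3},
    3: {0, 1, 2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5},
}
n = len(adj)
dense = n * (n - 1) * (n - 2) // 6  # all unordered triples a dense model sees
print(dense, kept_triples(adj))     # -> 35 4
```

On this lollipop graph a dense model would process all 35 unordered triples, while the connectivity-guided restriction keeps only the 4 inside the 4-clique; the path tail contributes none, matching the claim that higher-order modeling is unnecessary outside biconnected components. (A production implementation would compute the components directly, e.g. with a linear-time articulation-point DFS, rather than this quadratic brute force.)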
Related papers
- TorchLean: Formalizing Neural Networks in Lean [71.68907600404513]
We introduce TorchLean, a framework that treats learned models as first-class mathematical objects with a single, precise semantics shared by execution and verification. We validate TorchLean end-to-end on certified robustness, physics-informed residual bounds for PINNs, and Lyapunov-style neural controller verification.
arXiv Detail & Related papers (2026-02-26T05:11:44Z) - Universally Invariant Learning in Equivariant GNNs [47.74131538835446]
Equivariant Graph Neural Networks (GNNs) have demonstrated significant success across various applications. We present a theoretically grounded framework for constructing complete equivariant GNNs. Our results demonstrate superior completeness and excellent performance with only a few layers.
arXiv Detail & Related papers (2025-10-15T05:50:16Z) - Constructive Lyapunov Functions via Topology-Preserving Neural Networks [0.0]
ONN achieves order-optimal performance on convergence rate, edge efficiency, and computational complexity. Empirical validation on 3M-node semantic networks demonstrates a 99.75% improvement over baseline methods. ORTSF integration into transformers achieves a 14.7% perplexity reduction and 2.3x faster convergence.
arXiv Detail & Related papers (2025-10-10T17:46:52Z) - DUPLEX: Dual GAT for Complex Embedding of Directed Graphs [17.84142263305169]
Current directed graph embedding methods build upon undirected techniques but often inadequately capture directed edge information.
We propose DUPLEX, an inductive framework for complex embeddings of directed graphs.
DUPLEX outperforms state-of-the-art models, especially for nodes with sparse connectivity.
arXiv Detail & Related papers (2024-06-08T07:48:16Z) - EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph Completion [54.12709176438264]
Commonsense knowledge graphs (CSKGs) utilize free-form text to represent named entities, short phrases, and events as their nodes.
Current methods leverage semantic similarities to increase graph density, but the semantic plausibility of the nodes and their relations is under-explored.
We propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class.
arXiv Detail & Related papers (2024-02-15T02:27:23Z) - Efficient Link Prediction via GNN Layers Induced by Negative Sampling [86.87385758192566]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories. We propose a novel GNN architecture whereby the forward pass explicitly depends on both positive (as is typical) and negative (unique to our approach) edges. This is achieved by recasting the embeddings themselves as minimizers of a forward-pass-specific energy function that favors separation of positive and negative samples.
arXiv Detail & Related papers (2023-10-14T07:02:54Z) - Binary Graph Convolutional Network with Capacity Exploration [58.99478502486377]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node attributes.
Our Bi-GCN can reduce the memory consumption by an average of 31x for both the network parameters and input data, and accelerate the inference speed by an average of 51x.
arXiv Detail & Related papers (2022-10-24T12:05:17Z) - Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - The Curious Case of Convex Neural Networks [12.56278477726461]
We show that the convexity constraints can be enforced on both fully connected and convolutional layers.
We draw valuable insights: (a) Input Output Convex Neural Networks (IOC-NNs) self-regularize and reduce the problem of overfitting; (b) although heavily constrained, they outperform base multilayer perceptrons and achieve performance similar to base convolutional architectures.
arXiv Detail & Related papers (2020-06-09T08:16:38Z) - Interpretable and Efficient Heterogeneous Graph Convolutional Network [27.316334213279973]
We propose an interpretable and efficient Heterogeneous Graph Convolutional Network (ie-HGCN) to learn the representations of objects in Heterogeneous Information Networks (HINs).
ie-HGCN can automatically extract useful meta-paths for each object from all possible meta-paths within a length limit.
It can also reduce the computational cost by avoiding intermediate HIN transformation and neighborhood attention.
arXiv Detail & Related papers (2020-05-27T06:06:00Z) - Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
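The pairwise-interaction aggregator described for BGNN above can be sketched as follows (a simplified, hedged reading: the elementwise-product form, the averaging over neighbor pairs, and all names here are illustrative assumptions, not the paper's exact operator, which also involves learned weight matrices):

```python
from itertools import combinations

def bilinear_aggregate(h, neighbors):
    """Usual neighbor sum, augmented with the average of pairwise
    elementwise products of neighbor representations."""
    dim = len(h[neighbors[0]])
    # Standard linear term: sum of neighbor features.
    linear = [sum(h[u][i] for u in neighbors) for i in range(dim)]
    # Bilinear term: elementwise products over all neighbor pairs.
    pairs = list(combinations(neighbors, 2))
    pair_term = [0.0] * dim
    for u, w in pairs:
        for i in range(dim):
            pair_term[i] += h[u][i] * h[w][i]
    if pairs:
        pair_term = [x / len(pairs) for x in pair_term]
    return [a + b for a, b in zip(linear, pair_term)]

# Toy features for three neighbors of some target node.
h = {1: [1.0, 0.0], 2: [0.0, 2.0], 3: [1.0, 1.0]}
out = bilinear_aggregate(h, [1, 2, 3])
print(out)
```

The pair term is what distinguishes this from a plain weighted-sum convolution: it lets co-occurring neighbor features interact multiplicatively (here yielding roughly [2.33, 3.67] instead of the plain sum [2.0, 3.0]) rather than only additively.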
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.