Homomorphism Expressivity of Spectral Invariant Graph Neural Networks
- URL: http://arxiv.org/abs/2503.00485v1
- Date: Sat, 01 Mar 2025 13:23:49 GMT
- Title: Homomorphism Expressivity of Spectral Invariant Graph Neural Networks
- Authors: Jingchu Gai, Yiheng Du, Bohang Zhang, Haggai Maron, Liwei Wang
- Abstract summary: We prove that spectral invariant GNNs can homomorphism-count exactly a class of specific tree-like graphs which we refer to as parallel trees. Our results significantly extend Arvind et al. (2024) and settle their open questions. We generalize our analysis to higher-order GNNs and answer an open question raised by Zhang et al. (2024).
- Score: 25.040557677497745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph spectra are an important class of structural features on graphs that have shown promising results in enhancing Graph Neural Networks (GNNs). Despite their widespread practical use, the theoretical understanding of the power of spectral invariants -- particularly their contribution to GNNs -- remains incomplete. In this paper, we address this fundamental question through the lens of homomorphism expressivity, providing a comprehensive and quantitative analysis of the expressive power of spectral invariants. Specifically, we prove that spectral invariant GNNs can homomorphism-count exactly a class of specific tree-like graphs which we refer to as parallel trees. We highlight the significance of this result in various contexts, including establishing a quantitative expressiveness hierarchy across different architectural variants, offering insights into the impact of GNN depth, and understanding the subgraph counting capabilities of spectral invariant GNNs. In particular, our results significantly extend Arvind et al. (2024) and settle their open questions. Finally, we generalize our analysis to higher-order GNNs and answer an open question raised by Zhang et al. (2024).
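For intuition, homomorphism counts of trees can be computed exactly by a standard dynamic program over the tree. The sketch below illustrates that textbook algorithm, not the paper's construction; parallel trees, whose precise definition is given in the paper, generalize the ordinary trees handled here.

```python
# A minimal sketch (textbook dynamic programming, not the paper's method):
# counting homomorphisms hom(F, G) from a tree F into a graph G.
import networkx as nx

def tree_hom_count(tree: nx.Graph, root, G: nx.Graph) -> int:
    """Number of homomorphisms from `tree` (rooted at `root`) into G."""
    def count(u, parent):
        # cnt[v] = homomorphisms of the subtree rooted at u that map u -> v
        cnt = {v: 1 for v in G.nodes}
        for c in tree.neighbors(u):
            if c == parent:
                continue
            child = count(c, u)
            for v in G.nodes:
                # each child of u must land on a neighbor of v in G
                cnt[v] *= sum(child[w] for w in G.neighbors(v))
        return cnt
    return sum(count(root, None).values())

# Example: homomorphisms from the 3-vertex path into a triangle.
P3 = nx.path_graph(3)          # 0 - 1 - 2
K3 = nx.complete_graph(3)
print(tree_hom_count(P3, 1, K3))  # 12
```

For the path on three vertices mapped into a triangle, the middle vertex has 3 possible images and each endpoint 2, giving 3 * 2 * 2 = 12 homomorphisms, matching the output.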
Related papers
- Spectral Graph Neural Networks are Incomplete on Graphs with a Simple Spectrum [2.916558661202724]
Spectral features are widely incorporated within Graph Neural Networks (GNNs) to improve their expressive power. We propose a method to provably improve the expressivity of spectral GNNs (SGNNs) on graphs with a simple spectrum.
arXiv Detail & Related papers (2025-06-05T19:26:29Z)
- Generalization of Geometric Graph Neural Networks [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs).
We prove a bound on the generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN.
The most important observation is that the generalization capability can be realized with a single large graph, rather than being limited by graph size as in previous results.
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the logarithm of graph size, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- On the Expressive Power of Spectral Invariant Graph Neural Networks [28.557550571187253]
We introduce a unified message-passing framework for designing spectral invariant GNNs, called Eigenspace Projection GNN (EPNN).
We show that EPNN essentially unifies all prior spectral invariant architectures, in that each is either strictly less expressive than or equivalent to EPNN.
We also discuss whether spectral features can yield additional expressiveness when combined with more expressive GNNs.
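As a rough illustration of the spectral invariants such architectures consume, the following sketch computes eigenspace projection matrices of a symmetric adjacency matrix. The grouping tolerance and the choice of the adjacency matrix (rather than a Laplacian) are assumptions for this example, not details of EPNN.

```python
# A minimal sketch of eigenspace projections, the raw ingredients that
# spectral invariant architectures operate on. Tolerance is an assumption.
import numpy as np

def eigenspace_projections(A: np.ndarray, tol: float = 1e-8):
    """Return (eigenvalue, projection) pairs for a symmetric matrix A.

    P_lam = V V^T over an orthonormal basis V of the lam-eigenspace; the
    multiset {(lam, P_lam)} does not depend on how eigh orders or rotates
    eigenvectors within an eigenspace, which is why it is an invariant.
    """
    vals, vecs = np.linalg.eigh(A)          # ascending eigenvalues
    projections, i = [], 0
    while i < len(vals):
        j = i
        while j < len(vals) and vals[j] - vals[i] < tol:
            j += 1                          # group (numerically) equal eigenvalues
        V = vecs[:, i:j]                    # basis of one eigenspace
        projections.append((vals[i], V @ V.T))
        i = j
    return projections

# Example: the 4-cycle has eigenvalues -2, 0 (with multiplicity 2), and 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
for lam, P in eigenspace_projections(A):
    print(round(lam, 6), P.round(3))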
arXiv Detail & Related papers (2024-06-06T17:59:41Z)
- Generalization of Graph Neural Networks through the Lens of Homomorphism [7.223313563198697]
We propose to study the generalization of Graph Neural Networks (GNNs) through a novel perspective: analyzing the entropy of graph homomorphism.
By linking graph homomorphism with information-theoretic measures, we derive generalization bounds for both graph and node classifications.
These bounds are capable of capturing subtleties inherent in various graph structures, including but not limited to paths, cycles and cliques.
arXiv Detail & Related papers (2024-03-10T03:51:59Z)
- Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness [35.409017863665575]
Homomorphism expressivity measures the ability of GNN models to count graphs under homomorphism.
By examining four classes of prominent GNNs as case studies, we derive simple, unified, and elegant descriptions of their homomorphism expressivity.
Our results provide novel insights into a series of previous work, unify the landscape of different subareas in the community, and settle several open questions.
arXiv Detail & Related papers (2024-01-16T17:23:23Z)
- On the Expressive Power of Graph Neural Networks [0.0]
By extending deep learning to graph-structured data, Graph Neural Networks (GNNs) can solve a diverse set of tasks in fields including social science, chemistry, and medicine.
arXiv Detail & Related papers (2024-01-03T08:54:56Z)
- Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All? [61.35457647107439]
Most real-world graphs, whether predominantly homophilic or heterophilic, contain a mixture of nodes exhibiting both homophilic and heterophilic structural patterns.
We provide evidence that Graph Neural Networks (GNNs) on node classification typically perform admirably on homophilic nodes.
We then propose a rigorous, non-i.i.d. PAC-Bayesian generalization bound for GNNs, revealing reasons for the performance disparity.
arXiv Detail & Related papers (2023-06-02T07:46:20Z)
- On the Expressiveness and Generalization of Hypergraph Neural Networks [77.65788763444877]
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs).
Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes.
arXiv Detail & Related papers (2023-03-09T18:42:18Z)
- How Powerful are Spectral Graph Neural Networks [9.594432031144715]
Spectral Graph Neural Networks are a class of Graph Neural Networks based on graph signal filters.
We first prove that even spectral GNNs without nonlinearity can produce arbitrary graph signals.
We also establish a connection between the expressive power of spectral GNNs and Graph Isomorphism (GI) testing.
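For concreteness, here is a minimal sketch of such a linear (nonlinearity-free) spectral filter: a polynomial in the graph Laplacian, which acts on a signal exactly as a function applied to the Laplacian's eigenvalues. The graph and filter coefficients below are arbitrary placeholders, not taken from the paper.

```python
# A minimal sketch of a linear spectral filter: y = sum_k theta[k] * L^k x,
# equivalently y = U g(Lambda) U^T x with g the same polynomial.
import numpy as np

def poly_filter(L: np.ndarray, theta, x: np.ndarray) -> np.ndarray:
    """Apply the polynomial filter sum_k theta[k] * L^k to the signal x."""
    y = np.zeros_like(x)
    p = x.copy()                 # p holds L^k x
    for t in theta:
        y += t * p
        p = L @ p
    return y

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A        # combinatorial Laplacian
x = rng.standard_normal(4)       # a graph signal
theta = [0.5, -0.2, 0.1]         # placeholder filter coefficients

# Same filter evaluated in the spectral domain.
vals, U = np.linalg.eigh(L)
g = sum(t * vals**k for k, t in enumerate(theta))
y_spectral = U @ (g * (U.T @ x))
print(np.allclose(poly_filter(L, theta, x), y_spectral))  # True
```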
arXiv Detail & Related papers (2022-05-23T10:22:12Z)
- Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
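The linear-algebra fact behind this observation can be checked directly: the diagonal of A^k counts closed walks of length k at each node (what the summary calls closed paths). A minimal sketch:

```python
# Closed-walk counting from powers of the adjacency matrix: (A^k)[v, v]
# is the number of closed walks of length k starting and ending at v.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])       # triangle K3

A3 = np.linalg.matrix_power(A, 3)
print(np.diag(A3))              # [2 2 2]: two orientations of the triangle
print(np.trace(A3) // 6)        # 1 triangle in total: trace(A^3) / 6
```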
arXiv Detail & Related papers (2022-05-19T18:40:25Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
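As a hedged illustration of substructure encoding (GSN's actual message passing is richer), one can augment node features with counts of a chosen substructure; triangles are used below purely as an example.

```python
# A minimal sketch in the spirit of GSN's substructure encoding: append
# each node's triangle count as an extra input feature column. GSN itself
# supports general substructures; triangles are just one choice here.
import networkx as nx
import numpy as np

def triangle_augmented_features(G: nx.Graph, X: np.ndarray) -> np.ndarray:
    """Append per-node triangle counts as an extra feature column."""
    tri = nx.triangles(G)                        # node -> #triangles at node
    counts = np.array([[tri[v]] for v in G.nodes], float)
    return np.hstack([X, counts])

G = nx.complete_graph(4)                         # every node lies in 3 triangles
X = np.ones((4, 2))                              # dummy input features
print(triangle_augmented_features(G, X))
```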
arXiv Detail & Related papers (2020-06-16T15:30:31Z)