From Local to Global: Spectral-Inspired Graph Neural Networks
- URL: http://arxiv.org/abs/2209.12054v1
- Date: Sat, 24 Sep 2022 17:19:00 GMT
- Title: From Local to Global: Spectral-Inspired Graph Neural Networks
- Authors: Ningyuan Huang, Soledad Villar, Carey E. Priebe, Da Zheng, Chengyue
Huang, Lin Yang, Vladimir Braverman
- Abstract summary: Graph Neural Networks (GNNs) are powerful deep learning methods for non-Euclidean data.
Popular GNNs are message-passing algorithms (MPNNs) that aggregate and combine signals in a local graph neighborhood.
Deep MPNNs can suffer from issues like over-smoothing or over-squashing.
- Score: 28.858773653743075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are powerful deep learning methods for
non-Euclidean data. Popular GNNs are message-passing algorithms (MPNNs) that
aggregate and combine signals in a local graph neighborhood. However, shallow
MPNNs tend to miss long-range signals and perform poorly on some heterophilous
graphs, while deep MPNNs can suffer from issues like over-smoothing or
over-squashing. To mitigate such issues, existing works typically borrow
normalization techniques from training neural networks on Euclidean data or
modify the graph structures. Yet these approaches are not well-understood
theoretically and could increase the overall computational complexity. In this
work, we draw inspiration from spectral graph embedding and propose
$\texttt{PowerEmbed}$ -- a simple layer-wise normalization technique to boost
MPNNs. We show $\texttt{PowerEmbed}$ can provably express the top-$k$ leading
eigenvectors of the graph operator, which prevents over-smoothing and is
agnostic to the graph topology; meanwhile, it produces a list of
representations ranging from local features to global signals, which avoids
over-squashing. We apply $\texttt{PowerEmbed}$ in a wide range of simulated and
real graphs and demonstrate its competitive performance, particularly for
heterophilous graphs.
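The abstract presents $\texttt{PowerEmbed}$ as a layer-wise normalization that provably expresses the top-$k$ leading eigenvectors of the graph operator. Below is a minimal NumPy sketch of that mechanism, assuming subspace (power) iteration with QR orthonormalization as the normalization; the paper's exact scheme and how the representations feed a downstream classifier may differ.

```python
import numpy as np

def power_embed(A, X, num_layers):
    """Sketch: orthonormalized power (subspace) iteration on a graph operator.

    A: (n, n) symmetric operator, e.g. the normalized adjacency.
    X: (n, k) input features; k embedding columns are tracked.
    Returns one (n, k) representation per layer, ranging from local
    (early layers) to global/spectral (late layers).
    """
    reps = [X]
    for _ in range(num_layers):
        X = A @ X                  # one message-passing / power step
        Q, _ = np.linalg.qr(X)     # layer-wise normalization: orthonormalize
        X = Q                      # keeps k distinct directions alive
        reps.append(X)
    return reps

# Toy usage on a random undirected graph.
rng = np.random.default_rng(0)
n, k = 50, 4
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
A = A + A.T
deg = np.maximum(A.sum(1), 1.0)
A_norm = A / np.sqrt(np.outer(deg, deg))   # D^{-1/2} A D^{-1/2}
reps = power_embed(A_norm, rng.standard_normal((n, k)), num_layers=10)
```

Collecting the per-layer outputs yields the local-to-global list of representations the abstract mentions: early layers stay close to the input features, while later layers converge toward the leading eigenspace instead of collapsing onto a single dominant eigenvector, the usual over-smoothing failure.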
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
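As a hedged illustration of combining the two filter types (not the paper's parametrization): a one-hop spatial filter summed with a low-rank spectral filter defined on the top-$k$ eigenpairs, with an assumed polynomial filter $g$.

```python
import numpy as np

def s2_layer_sketch(A_norm, X, W_spat, theta, U, lam):
    """Sketch: sum a spatial (one-hop) filter and a spectral filter.

    U (n, k) and lam (k,) are the top-k eigenpairs of A_norm. The spectral
    part applies a learned function g(lam) directly to the eigenvalues, so
    one layer can realize global filters a shallow spatial stack cannot.
    """
    spatial = A_norm @ X @ W_spat                        # local message passing
    g = theta[0] + theta[1] * lam + theta[2] * lam ** 2  # illustrative g(lambda)
    spectral = U @ (g[:, None] * (U.T @ X))              # low-rank spectral filter
    return np.tanh(spatial + spectral)
```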
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
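A rough sketch of "selecting ego-graphs based on their spectral embeddings": embed nodes with leading eigenvectors, then pick ego-graph centers by farthest-point sampling. The farthest-point criterion is a stand-in assumption; the paper's greedy coreset objective differs.

```python
import numpy as np

def spectral_greedy_centers(A, k_eig, m):
    """Sketch: choose m ego-graph centers greedily in spectral space.

    Nodes are embedded with the leading eigenvectors of the normalized
    adjacency; centers are then picked by farthest-point sampling so the
    selected ego-graphs cover the spectral embedding.
    """
    deg = np.maximum(A.sum(1), 1.0)
    A_norm = A / np.sqrt(np.outer(deg, deg))
    _, U = np.linalg.eigh(A_norm)
    Z = U[:, -k_eig:]                        # spectral node embedding
    centers = [int(np.argmax((Z ** 2).sum(1)))]
    d = np.linalg.norm(Z - Z[centers[0]], axis=1)
    while len(centers) < m:
        c = int(np.argmax(d))                # farthest node from chosen set
        centers.append(c)
        d = np.minimum(d, np.linalg.norm(Z - Z[c], axis=1))
    return centers                           # train on these ego-graphs only
```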
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
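A small sketch of the graphon viewpoint: fix a graphon $W: [0,1]^2 \to [0,1]$, sample graphs of growing size from it, and train the GNN on that sequence. The toy graphon below is an illustrative assumption.

```python
import numpy as np

def sample_from_graphon(W, n, rng):
    """Sketch: sample an n-node graph from a graphon W: [0,1]^2 -> [0,1].

    Latent positions u_i ~ Uniform[0,1]; edge (i, j) appears independently
    with probability W(u_i, u_j). Training on such graphs of growing size
    approximates learning on the limit object itself.
    """
    u = rng.random(n)
    P = W(u[:, None], u[None, :])            # edge-probability matrix
    A = np.triu(rng.random((n, n)) < P, 1)   # sample the upper triangle
    return (A + A.T).astype(float)

rng = np.random.default_rng(0)
toy_graphon = lambda x, y: 0.6 - 0.4 * np.abs(x - y)   # smooth toy graphon
graphs = [sample_from_graphon(toy_graphon, n, rng) for n in (50, 100, 200)]
```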
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
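A brute-force variant of the edge-adding idea, for intuition only: repeatedly add the non-edge that most enlarges the spectral gap of the normalized adjacency. FoSR's contribution is precisely to replace the inner eigensolves with a cheap first-order estimate.

```python
import numpy as np

def spectral_gap(A):
    """Gap 1 - lambda_2 of the symmetric normalized adjacency."""
    deg = np.maximum(A.sum(1), 1.0)
    lam = np.linalg.eigvalsh(A / np.sqrt(np.outer(deg, deg)))
    return 1.0 - lam[-2]          # eigenvalues come back sorted ascending

def greedy_add_edges(A, num_edges):
    """Sketch: add the non-edge that most enlarges the spectral gap.

    Brute force for clarity; FoSR instead scores candidate edges with a
    first-order (perturbation) estimate to avoid re-solving eigenproblems.
    """
    A = A.copy()
    n = A.shape[0]
    for _ in range(num_edges):
        best, best_gap = None, spectral_gap(A)
        for i in range(n):
            for j in range(i + 1, n):
                if A[i, j] == 0:
                    A[i, j] = A[j, i] = 1.0    # tentatively add edge
                    g = spectral_gap(A)
                    if g > best_gap:
                        best, best_gap = (i, j), g
                    A[i, j] = A[j, i] = 0.0    # undo
        if best is None:
            break
        A[best[0], best[1]] = A[best[1], best[0]] = 1.0
    return A
```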
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
- Expander Graph Propagation [0.0]
We propose Expander Graph Propagation (EGP), an elegant approach based on propagating information over expander graphs.
We show that EGP is able to address all of the above concerns, while requiring minimal effort to set up.
We believe our analysis paves the way to a novel class of scalable methods to counter oversquashing in GNNs.
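A hedged sketch of the propagation pattern: EGP constructs Cayley graph expanders, while the stand-in below unions random permutations (an expander with high probability) and alternates global mixing on it with local propagation on the data graph.

```python
import numpy as np

def random_expander(n, d, rng):
    """Union of d random permutations: an expander with high probability.
    (EGP itself uses deterministic Cayley graph expanders.)"""
    A = np.zeros((n, n))
    for _ in range(d):
        pi = rng.permutation(n)
        A[np.arange(n), pi] = 1.0
    A = np.maximum(A, A.T)
    np.fill_diagonal(A, 0.0)    # drop accidental self-loops
    return A

def egp_layer_sketch(A_graph, A_expander, X, W1, W2):
    """Alternate local propagation on the data graph with global mixing on
    the expander, whose logarithmic diameter relieves bottlenecks."""
    X = np.tanh(A_graph @ X @ W1)      # local step on the data graph
    X = np.tanh(A_expander @ X @ W2)   # global step over the expander
    return X
```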
arXiv Detail & Related papers (2022-10-06T15:36:37Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
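A minimal sketch of the quantized-message idea: out-of-batch neighbors are represented by codewords, so a mini-batch convolution needs only the batch features plus a small codebook. The nearest-centroid assignment and fixed codebook here are simplifications; the actual framework learns the codebook end-to-end and pairs it with a low-rank convolution matrix.

```python
import numpy as np

def quantize(X, codebook):
    """Assign each row of X to its nearest codeword (the VQ step)."""
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(1)

def batch_conv_with_vq(A, X, batch, codebook):
    """Sketch: approximate (A @ X)[batch] without touching all neighbors.

    Out-of-batch nodes are represented by their codeword, so only the
    mini-batch features plus a small codebook are needed per step.
    """
    X_hat = codebook[quantize(X, codebook)]   # quantized stand-in features
    X_hat[batch] = X[batch]                   # exact features inside the batch
    return (A @ X_hat)[batch]                 # messages for the batch only
```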
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
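For illustration, two classic local parameters, degree and per-node triangle count, appended as extra feature columns; the paper studies a broader family (homomorphism counts), so this choice is an assumption.

```python
import numpy as np

def add_local_parameters(A, X):
    """Sketch: append simple local graph parameters as feature columns.

    Degree and per-node triangle count (diag(A^3) / 2) are two instances
    of the subgraph counts the paper considers; any GNN can then consume
    the augmented features unchanged.
    """
    deg = A.sum(1, keepdims=True)
    tri = (np.diagonal(A @ A @ A) / 2.0)[:, None]  # triangles at each node
    return np.hstack([X, deg, tri])
```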
arXiv Detail & Related papers (2021-06-12T07:43:51Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
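Under the feature-smoothing view, message passing can be replaced by precomputed smoothed features fed to a plain classifier. A minimal homogeneous sketch; extending it to heterogeneous graphs by repeating the smoothing per relation type is an assumption about the construction.

```python
import numpy as np

def smoothed_features(A, X, hops):
    """Sketch: precompute [X, SX, S^2 X, ...] with S the normalized
    adjacency; a plain MLP on the concatenation then replaces message
    passing at training time (the 'feature smoothing' view)."""
    deg = np.maximum(A.sum(1), 1.0)
    S = A / np.sqrt(np.outer(deg, deg))
    feats, cur = [X], X
    for _ in range(hops):
        cur = S @ cur                 # one more hop of smoothing
        feats.append(cur)
    return np.hstack(feats)           # shape (n, (hops + 1) * d)
```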
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of techniques for graph representation learning.
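A minimal sketch of one distance encoding: multi-source BFS distance from a target node set, attached as an extra structural feature. The DE family also covers other graph-distance measures, such as random-walk landing probabilities.

```python
import numpy as np
from collections import deque

def distance_encoding(adj_list, targets, n):
    """Sketch: multi-source BFS distance from a target node set, returned
    as one extra structural feature column per node."""
    dist = np.full(n, np.inf)
    queue = deque()
    for t in targets:
        dist[t] = 0.0
        queue.append(t)
    while queue:
        u = queue.popleft()
        for v in adj_list[u]:
            if dist[v] == np.inf:        # unvisited node
                dist[v] = dist[u] + 1.0
                queue.append(v)
    return dist[:, None]                 # append to the node feature matrix
```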
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.