Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational
and Temporal Graphs
- URL: http://arxiv.org/abs/2311.01647v1
- Date: Fri, 3 Nov 2023 00:33:24 GMT
- Title: Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational
and Temporal Graphs
- Authors: Yeyuan Chen and Dingmin Wang
- Abstract summary: We investigate $\mathcal{FOC}_2$, a fragment of first-order logic with two variables and counting quantifiers.
We propose a simple graph transformation technique, akin to a preprocessing step, which can be executed in linear time.
Our results consistently demonstrate that R$^2$-GNN with the graph transformation outperforms the baseline methods on both synthetic and real-world datasets.
- Score: 8.095679736030146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a powerful framework for graph representation learning, Graph Neural
Networks (GNNs) have garnered significant attention in recent years. However,
to the best of our knowledge, there has been no formal analysis of the logical
expressiveness of GNNs as Boolean node classifiers over multi-relational
graphs, where each edge carries a specific relation type. In this paper, we
investigate $\mathcal{FOC}_2$, a fragment of first-order logic with two
variables and counting quantifiers. On the negative side, we demonstrate that
the R$^2$-GNN architecture, which extends the local message passing GNN by
incorporating global readout, fails to capture $\mathcal{FOC}_2$ classifiers in
the general case. Nevertheless, on the positive side, we establish that
R$^2$-GNN models are equivalent to $\mathcal{FOC}_2$ classifiers under certain
restricted yet reasonable scenarios. To address the limitations of R$^2$-GNNs
regarding expressiveness, we propose a simple graph transformation technique,
akin to a preprocessing step, which can be executed in linear time. This
transformation enables R$^2$-GNNs to effectively capture any $\mathcal{FOC}_2$
classifiers when applied to the "transformed" input graph. Moreover, we extend
our analysis of expressiveness and graph transformation to temporal graphs,
exploring several temporal GNN architectures and providing an expressiveness
hierarchy for them. To validate our findings, we implement R$^2$-GNNs and the
graph transformation technique and conduct empirical tests in node
classification tasks against various well-known GNN architectures that support
multi-relational or temporal graphs. Our experimental results consistently
demonstrate that R$^2$-GNN with the graph transformation outperforms the
baseline methods on both synthetic and real-world datasets.
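The abstract does not spell out the graph transformation itself; below is a minimal Python sketch of one plausible linear-time construction, which reifies each typed edge as an auxiliary node whose feature one-hot encodes its relation, so that an untyped message passing GNN can still see relation types. The function `reify_edges` and its signature are illustrative assumptions, not the authors' code.

```python
# A minimal sketch (assumption, not the authors' code) of a linear-time
# "edge reification" transformation for a multi-relational graph: every typed
# edge becomes an auxiliary node whose feature one-hot encodes the relation,
# so a plain single-relation GNN can still read relation types.

def reify_edges(node_feats, triples, num_relations):
    """node_feats: list of feature lists; triples: list of (src, rel_id, dst).
    Returns features for original + auxiliary nodes and an untyped edge list."""
    feat_dim = len(node_feats[0])
    new_feats = [f + [0.0] * num_relations for f in node_feats]  # pad originals
    edges = []
    for (u, r, v) in triples:                 # O(|E|): one pass over the edges
        e = len(new_feats)                    # id of the new auxiliary node
        rel_onehot = [0.0] * num_relations
        rel_onehot[r] = 1.0
        new_feats.append([0.0] * feat_dim + rel_onehot)
        edges.append((u, e))                  # source    -> edge-node
        edges.append((e, v))                  # edge-node -> target
    return new_feats, edges

# Toy usage: 3 nodes, 2 relation types.
feats, edges = reify_edges(
    node_feats=[[1.0], [0.0], [0.5]],
    triples=[(0, 0, 1), (1, 1, 2)],
    num_relations=2,
)
print(len(feats), edges)   # 5 nodes in total, 4 untyped edges
```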
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
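As a rough illustration of combining spatially and spectrally parametrized filters, here is a hedged numpy sketch; the actual S$^2$GNN parametrization (spectral basis, windowing, learned filters) is more elaborate, and `spatio_spectral_layer` with its arguments is an invented placeholder.

```python
# Hedged sketch: add a one-hop spatial filter to a spectral filter built from
# the smoothest Laplacian eigenvectors. Not the S^2GNN architecture itself.
import numpy as np

def spatio_spectral_layer(A, X, W_spatial, spectral_gain, k=4):
    """A: dense adjacency (n x n), X: node features (n x d).
    Spatial part: one-hop aggregation. Spectral part: filter over the k lowest
    Laplacian eigenvectors, scaled per-eigenvalue by spectral_gain."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                       # combinatorial Laplacian
    evals, evecs = np.linalg.eigh(L)           # full eigendecomposition (toy sizes)
    U, lam = evecs[:, :k], evals[:k]           # keep the k smoothest modes
    spatial = A @ X @ W_spatial                            # spatially parametrized
    spectral = U @ np.diag(spectral_gain(lam)) @ U.T @ X   # spectrally parametrized
    return np.tanh(spatial + spectral)

# Toy usage on a 4-node path graph.
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))
out = spatio_spectral_layer(A, X, W_spatial=np.eye(3),
                            spectral_gain=lambda lam: np.exp(-lam))
print(out.shape)   # (4, 3)
```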
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between the training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z) - $p$-Laplacian Based Graph Neural Networks [27.747195341003263]
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs.
We propose a new $p$-Laplacian based GNN model, termed $p$GNN, whose message passing mechanism is derived from a discrete regularization framework.
We show that the new message passing mechanism works simultaneously as low-pass and high-pass filters, thus making $p$GNNs effective on both homophilic and heterophilic graphs.
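A hedged sketch of the graph $p$-Laplacian operator underlying such a message passing scheme, written as one explicit smoothing step; the actual $p$GNN update, derived from the discrete regularization objective, differs in its exact form, and `p_laplacian_step` is an illustrative helper.

```python
# Hedged sketch of one graph p-Laplacian smoothing step (not the pGNN update).
import numpy as np

def p_laplacian_step(A, X, p=1.5, alpha=0.2, eps=1e-8):
    """A: adjacency (n x n), X: node features (n x d).
    Edge reweighting by |x_i - x_j|^(p-2) (p < 2) smooths similar neighbors
    strongly and preserves sharp differences, which helps on heterophilic graphs."""
    n = A.shape[0]
    out = X.copy()
    for i in range(n):
        grad = np.zeros(X.shape[1])
        for j in range(n):
            if A[i, j] == 0:
                continue
            diff = X[i] - X[j]
            w = (np.linalg.norm(diff) + eps) ** (p - 2)   # reweighted edge
            grad += A[i, j] * w * diff
        out[i] = X[i] - alpha * grad                      # gradient-style update
    return out

A = np.array([[0,1,1],[1,0,0],[1,0,0]], dtype=float)
X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 2.0]])
print(p_laplacian_step(A, X, p=1.5))
```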
arXiv Detail & Related papers (2021-11-14T13:16:28Z) - VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using
Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
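A hedged sketch of the vector-quantization idea: out-of-batch neighbors contribute codebook vectors rather than exact features, so a mini-batch never needs the full neighborhood expansion. The codebook here is fixed rather than learned and the low-rank treatment of the convolution matrix is omitted; `quantize` and `approx_aggregate` are illustrative names.

```python
# Hedged sketch of VQ-style aggregation; not the full VQ-GNN framework.
import numpy as np

def quantize(X, codebook):
    """Assign each node feature to its nearest codeword (L2 distance)."""
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
    return d2.argmin(axis=1)

def approx_aggregate(A, X, codebook, in_batch):
    """One mean-aggregation step in which out-of-batch neighbors contribute
    their quantized (codebook) vectors instead of their exact features."""
    codes = quantize(X, codebook)
    X_eff = np.where(in_batch[:, None], X, codebook[codes])     # swap in codewords
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    return (A @ X_eff) / deg

A = np.array([[0,1,1],[1,0,1],[1,1,0]], dtype=float)
X = np.random.default_rng(1).normal(size=(3, 4))
codebook = np.random.default_rng(2).normal(size=(2, 4))
print(approx_aggregate(A, X, codebook, in_batch=np.array([True, True, False])))
```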
arXiv Detail & Related papers (2021-10-27T11:48:50Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
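One concrete instance of a local graph parameter is a per-node triangle count appended to the input features before running any GNN; the sketch below assumes that simple case, while the paper covers a broader family of homomorphism-count parameters, and `add_triangle_counts` is an illustrative helper.

```python
# Hedged sketch: append a local structural count (triangles per node) to the
# input features of an arbitrary GNN.
import numpy as np

def add_triangle_counts(A, X):
    """A: adjacency (n x n), X: node features (n x d).
    (A^3)_{ii} counts closed walks of length 3, i.e. 2 * triangles at node i."""
    tri = np.diag(np.linalg.matrix_power(A, 3)) / 2.0
    return np.concatenate([X, tri[:, None]], axis=1)

A = np.array([[0,1,1,0],
              [1,0,1,0],
              [1,1,0,1],
              [0,0,1,0]], dtype=float)
X = np.ones((4, 2))
print(add_triangle_counts(A, X))   # last column: 1, 1, 1, 0 triangles per node
```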
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - Rethinking Graph Regularization for Graph Neural Networks [21.32758655943999]
We show that graph Laplacian regularization brings little to no benefit to existing graph neural networks (GNNs).
We propose a simple but non-trivial variant of graph Laplacian regularization, called propagation-regularization (P-reg).
We demonstrate that P-reg can effectively boost the performance of existing GNN models on both node-level and graph-level tasks.
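A hedged sketch of what such a propagation-based penalty can look like: the supervised loss plus a term penalizing the gap between each node's prediction and the average prediction of its neighbors. The squared-error discrepancy and the `p_reg_loss` helper are assumptions for illustration; the paper discusses several discrepancy measures.

```python
# Hedged sketch of a propagation-regularized training loss (P-reg-style).
import numpy as np

def p_reg_loss(A, Z, labels, train_mask, mu=0.5):
    """A: adjacency (n x n), Z: predicted class probabilities (n x c),
    labels: int class ids, train_mask: bool mask of labeled nodes."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    Z_prop = (A @ Z) / deg                               # one propagation step
    ce = -np.mean(np.log(Z[train_mask, labels[train_mask]] + 1e-12))
    reg = np.mean((Z - Z_prop) ** 2)                     # propagation penalty
    return ce + mu * reg

A = np.array([[0,1,0],[1,0,1],[0,1,0]], dtype=float)
Z = np.array([[0.9,0.1],[0.6,0.4],[0.2,0.8]])
print(p_reg_loss(A, Z, labels=np.array([0,0,1]),
                 train_mask=np.array([True, False, True])))
```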
arXiv Detail & Related papers (2020-09-04T07:04:51Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
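A hedged sketch of one distance-encoding style feature, assuming shortest-path distance to a target node set; the paper defines DE more generally (e.g., via random-walk landing probabilities), and `spd_features` is an illustrative helper.

```python
# Hedged sketch: shortest-path-distance features relative to a target node set,
# one simple instance of a distance encoding.
from collections import deque
import numpy as np

def spd_features(adj_list, targets, unreachable=1e6):
    """adj_list: {node: [neighbors]}, targets: iterable of target node ids.
    Multi-source BFS gives each node's distance to the nearest target."""
    dist = {v: unreachable for v in adj_list}
    q = deque()
    for t in targets:
        dist[t] = 0
        q.append(t)
    while q:
        u = q.popleft()
        for w in adj_list[u]:
            if dist[w] > dist[u] + 1:
                dist[w] = dist[u] + 1
                q.append(w)
    return np.array([dist[v] for v in sorted(adj_list)], dtype=float)[:, None]

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(spd_features(adj, targets={0}).ravel())   # [0. 1. 2. 3.]
```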
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z) - GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z) - Gated Graph Recurrent Neural Networks [176.3960927323358]
We introduce Graph Recurrent Neural Networks (GRNNs) as a general learning framework for graph processes.
To address the problem of vanishing gradients, we put forward GRNNs with three different gating mechanisms: time, node and edge gates.
The numerical results also show that GRNNs outperform GNNs and RNNs, highlighting the importance of taking both the temporal and graph structures of a graph process into account.
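A hedged sketch of a gated graph recurrent cell with a single GRU-style gate; the paper's GRNNs use dedicated time, node, and edge gates, so `grnn_cell` below is a simplified illustration rather than the authors' architecture.

```python
# Hedged sketch: every node's hidden state is updated from its neighbors'
# states (one graph shift A @ H) and the current input, modulated by a gate.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grnn_cell(A, X_t, H_prev, Wx, Wh, Wxz, Whz):
    """A: adjacency (n x n); X_t: input at time t (n x d_in);
    H_prev: previous hidden state (n x d_h); W*: weight matrices."""
    agg = A @ H_prev                                   # neighbor hidden states
    z = sigmoid(X_t @ Wxz + agg @ Whz)                 # update-gate values
    H_cand = np.tanh(X_t @ Wx + agg @ Wh)              # candidate state
    return z * H_cand + (1.0 - z) * H_prev             # gated update

rng = np.random.default_rng(0)
A = np.array([[0,1],[1,0]], dtype=float)
X_t, H = rng.normal(size=(2, 3)), np.zeros((2, 4))
Wx, Wh = rng.normal(size=(3, 4)), rng.normal(size=(4, 4))
Wxz, Whz = rng.normal(size=(3, 4)), rng.normal(size=(4, 4))
print(grnn_cell(A, X_t, H, Wx, Wh, Wxz, Whz).shape)    # (2, 4)
```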
arXiv Detail & Related papers (2020-02-03T22:35:14Z)