KerGNNs: Interpretable Graph Neural Networks with Graph Kernels
- URL: http://arxiv.org/abs/2201.00491v1
- Date: Mon, 3 Jan 2022 06:16:30 GMT
- Title: KerGNNs: Interpretable Graph Neural Networks with Graph Kernels
- Authors: Aosong Feng, Chenyu You, Shiqiang Wang, and Leandros Tassiulas
- Abstract summary: Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
- Score: 14.421535610157093
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph kernels are historically the most widely-used technique for graph
classification tasks. However, these methods suffer from limited performance
because of the hand-crafted combinatorial features of graphs. In recent years,
graph neural networks (GNNs) have become the state-of-the-art method in
downstream graph-related tasks due to their superior performance. Most GNNs are
based on Message Passing Neural Network (MPNN) frameworks. However, recent
studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL)
algorithm in the graph isomorphism test. To address the limitations of existing
graph kernel and GNN methods, in this paper, we propose a novel GNN framework,
termed \textit{Kernel Graph Neural Networks} (KerGNNs), which integrates graph
kernels into the message passing process of GNNs. Inspired by convolution
filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden
graphs as graph filters which are combined with subgraphs to update node
embeddings using graph kernels. In addition, we show that MPNNs can be viewed
as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks
and use cross-validation to make fair comparisons with benchmarks. We show that
our method achieves competitive performance compared with existing
state-of-the-art methods, demonstrating the potential to increase the
representation ability of GNNs. We also show that the trained graph filters in
KerGNNs can reveal the local graph structures of the dataset, which
significantly improves the model interpretability compared with conventional
GNN models.
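To make the kernel-based message passing concrete, below is a minimal NumPy sketch of one KerGNN-style layer: each node's 1-hop ego-subgraph is compared against a set of small hidden graphs with a feature-weighted random-walk kernel, and the resulting kernel values form the node's new embedding. The kernel choice, the function names (`random_walk_kernel`, `kergnn_layer`), and the toy data are illustrative assumptions rather than the authors' implementation; in the actual model the hidden graphs' adjacencies and features are trainable parameters learned end to end.
```python
import numpy as np

def random_walk_kernel(A1, X1, A2, X2, num_steps=2):
    """Feature-weighted random-walk kernel between two attributed graphs
    (A: adjacency matrix, X: node-feature matrix). Illustrative choice of kernel."""
    S = X1 @ X2.T                    # pairwise node-feature similarities (n1 x n2)
    q = S.reshape(-1)                # start/stop weights on the product graph
    W = np.kron(A1, A2)              # adjacency of the direct-product graph
    k, walk = 0.0, q.copy()
    for _ in range(num_steps + 1):
        k += q @ walk                # accumulate walks of length 0..num_steps
        walk = W @ walk
    return k

def kergnn_layer(A, X, hidden_adjs, hidden_feats, num_steps=2):
    """One KerGNN-style update: output channel c for node v is the kernel value
    between v's 1-hop ego-subgraph and the c-th hidden graph (graph filter)."""
    n = A.shape[0]
    out = np.zeros((n, len(hidden_adjs)))
    for v in range(n):
        idx = np.concatenate(([v], np.flatnonzero(A[v])))   # node v plus its neighbours
        A_sub, X_sub = A[np.ix_(idx, idx)], X[idx]
        for c, (Ah, Xh) in enumerate(zip(hidden_adjs, hidden_feats)):
            out[v, c] = random_walk_kernel(A_sub, X_sub, Ah, Xh, num_steps)
    return out

# Toy usage: a 4-node path graph, 3-dimensional features, two 3-node hidden graphs.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
hidden_adjs = [(H + H.T) / 2 for H in (rng.uniform(size=(3, 3)) for _ in range(2))]
hidden_feats = [rng.normal(size=(3, 3)) for _ in range(2)]
print(kergnn_layer(A, X, hidden_adjs, hidden_feats).shape)   # -> (4, 2)
```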
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Graph Coordinates and Conventional Neural Networks -- An Alternative for Graph Neural Networks [0.10923877073891444]
We propose Topology Coordinate Neural Network (TCNN) and Directional Virtual Coordinate Neural Network (DVCNN) as novel alternatives to message passing GNNs.
TCNN and DVCNN achieve competitive or superior performance to message passing GNNs.
Our work expands the toolbox of techniques for graph-based machine learning.
arXiv Detail & Related papers (2023-12-03T10:14:10Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Measuring and Improving the Use of Graph Information in Graph Neural Networks [38.41049128525036]
Graph neural networks (GNNs) have been widely used for representation learning on graph data.
This paper introduces a context-surrounding GNN framework and proposes two smoothness metrics to measure the quantity and quality of information obtained from graph data.
A new GNN model, called CS-GNN, is then designed to improve the use of graph information based on the smoothness values of a graph.
arXiv Detail & Related papers (2022-06-27T10:27:28Z) - Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon (a minimal sampling sketch appears after this list).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - AutoGraph: Automated Graph Neural Network [45.94642721490744]
We propose a method to automate the design of deep Graph Neural Networks (GNNs).
In our proposed method, we add a new type of skip connection to the GNNs search space to encourage feature reuse.
We also allow our evolutionary algorithm to increase the number of GNN layers during evolution to generate deeper networks.
arXiv Detail & Related papers (2020-11-23T09:04:17Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
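The "Increase and Conquer" entry above trains GNNs on graphs of growing size that are Bernoulli-sampled from a graphon. Below is a minimal sketch of that sampling step; the particular graphon `W` and the growth schedule are illustrative assumptions, not taken from the paper.
```python
import numpy as np

def sample_from_graphon(W, n, rng):
    """Draw latent positions u_i ~ U[0, 1] and edges A_ij ~ Bernoulli(W(u_i, u_j))."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])                       # (n, n) edge-probability matrix
    A = np.triu(rng.uniform(size=(n, n)) < P, 1)        # Bernoulli draws, upper triangle only
    A = A.astype(float)
    return A + A.T                                      # undirected graph, no self-loops

# Illustrative graphon: connection probability decays with latent distance.
W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))

rng = np.random.default_rng(0)
for n in (50, 100, 200):                                # growing graph sizes across training stages
    A = sample_from_graphon(W, n, rng)
    print(n, int(A.sum() // 2))                         # node count and edge count
```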