Learnt Sparsification for Interpretable Graph Neural Networks
- URL: http://arxiv.org/abs/2106.12920v1
- Date: Wed, 23 Jun 2021 16:04:25 GMT
- Title: Learnt Sparsification for Interpretable Graph Neural Networks
- Authors: Mandeep Rathee, Zijian Zhang, Thorben Funke, Megha Khosla, and Avishek Anand
- Abstract summary: We propose a novel method called Kedge for explicitly sparsifying the underlying graph by removing unnecessary neighbors.
Kedge learns edge masks in a modular fashion and can be trained with any GNN, allowing for gradient-based optimization.
We show that Kedge effectively counters the over-smoothing phenomenon in deep GNNs by maintaining good task performance with increasing GNN layers.
- Score: 5.527927312898106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have achieved great success on various tasks and
fields that require relational modeling. GNNs aggregate node features using the
graph structure as inductive biases resulting in flexible and powerful models.
However, GNNs remain hard to interpret as the interplay between node features
and graph structure is only implicitly learned. In this paper, we propose a
novel method called Kedge for explicitly sparsifying the underlying graph by
removing unnecessary neighbors. Our key idea is a tractable sparsification
method based on the Hard Kumaraswamy distribution that can be used in
conjunction with any GNN model. Kedge learns edge masks in a modular fashion,
trained with any GNN, allowing for gradient-based optimization in an
end-to-end fashion. We demonstrate through extensive experiments that our
model Kedge can
prune a large proportion of the edges with only a minor effect on the test
accuracy. Specifically, in the PubMed dataset, Kedge learns to drop more than
80% of the edges with an accuracy drop of merely 2%, showing that the graph
structure makes only a small contribution in comparison to the node features.
Finally, we also show that Kedge effectively counters the over-smoothing
phenomenon in deep GNNs by maintaining good task performance as the number of
GNN layers increases.
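To make the mechanism concrete, the following is a minimal PyTorch sketch of the core idea, not the authors' implementation: each edge gets a gate sampled from a stretched-and-rectified (Hard) Kumaraswamy distribution, so edges can be dropped exactly (gate = 0) while the shape parameters stay trainable by backpropagation. The layer name `MaskedGCNLayer` and the stretch bounds `l = -0.1`, `r = 1.1` are illustrative assumptions.

```python
import torch


def hard_kuma_sample(log_a, log_b, l=-0.1, r=1.1):
    """Sample one Hard Kumaraswamy gate per edge (stretch-and-rectify).

    a, b > 0 are Kumaraswamy shape parameters, learned here per edge.
    Stretching the support to (l, r) and clamping back to [0, 1] puts
    non-zero probability mass exactly on 0 and 1, so an edge can be
    dropped outright while gradients still flow into a and b.
    """
    a, b = log_a.exp(), log_b.exp()
    u = torch.rand_like(a).clamp(1e-6, 1 - 1e-6)   # u ~ Uniform(0, 1)
    k = (1 - (1 - u).pow(1 / b)).pow(1 / a)        # Kumaraswamy inverse CDF
    return (l + (r - l) * k).clamp(0.0, 1.0)       # stretch, then rectify


class MaskedGCNLayer(torch.nn.Module):
    """GCN-style layer whose messages are gated by Hard Kumaraswamy masks.

    edge_index is a (2, E) long tensor of directed edges; every edge owns
    a learnable (log_a, log_b) pair. Sketch only; no degree normalization.
    """

    def __init__(self, in_dim, out_dim, num_edges):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)
        self.log_a = torch.nn.Parameter(torch.zeros(num_edges))
        self.log_b = torch.nn.Parameter(torch.zeros(num_edges))

    def forward(self, x, edge_index):
        src, dst = edge_index
        gate = hard_kuma_sample(self.log_a, self.log_b)  # (E,) in [0, 1]
        msg = self.lin(x)[src] * gate.unsqueeze(-1)      # gate each message
        out = torch.zeros(x.size(0), msg.size(-1), device=x.device)
        out.index_add_(0, dst, msg)                      # sum gated messages
        return torch.relu(out)
```

In a full setup one would add a sparsity penalty on the gates (e.g., the expected fraction of non-zero gates) to the task loss; that pressure is what drives the large edge-drop rates reported above.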
Related papers
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks [15.448462928073635]
Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data.
Recent studies show that GNNs are vulnerable to graph adversarial attacks.
We propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models.
arXiv Detail & Related papers (2022-01-30T06:32:44Z)
- KerGNNs: Interpretable Graph Neural Networks with Graph Kernels [14.421535610157093]
Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2022-01-03T06:16:30Z)
- Network In Graph Neural Network [9.951298152023691]
We present a model-agnostic methodology, Network In Graph Neural Network (NGNN), that allows arbitrary GNN models to increase their capacity by making the model deeper.
Instead of adding or widening GNN layers, NGNN deepens a GNN model by inserting non-linear feedforward neural network layer(s) within each GNN layer (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-11-23T03:58:56Z)
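The NGNN entry above can be sketched in a few lines, assuming a generic message-passing layer with signature `(x, edge_index) -> x`; the wrapper below is a hypothetical illustration, not the paper's code.

```python
import torch


class NGNNLayer(torch.nn.Module):
    """Deepen a GNN from *within* a layer: run the usual message passing,
    then apply a small non-linear MLP. Capacity grows without stacking
    additional message-passing (aggregation) layers."""

    def __init__(self, gnn_layer, hidden_dim):
        super().__init__()
        self.gnn_layer = gnn_layer            # any (x, edge_index) -> x module
        self.mlp = torch.nn.Sequential(       # the inserted feedforward block
            torch.nn.Linear(hidden_dim, hidden_dim),
            torch.nn.ReLU(),
        )

    def forward(self, x, edge_index):
        x = self.gnn_layer(x, edge_index)     # one round of neighbor aggregation
        return self.mlp(x)                    # extra depth, no extra hops
```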
- Edgeless-GNN: Unsupervised Inductive Edgeless Network Embedding [7.391641422048645]
We study the problem of embedding edgeless nodes such as users who newly enter the underlying network.
We propose Edgeless-GNN, a new framework that enables GNNs to generate node embeddings even for edgeless nodes through unsupervised inductive learning.
arXiv Detail & Related papers (2021-04-12T06:37:31Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights (a pruning sketch follows this entry).
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
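As a rough illustration of the joint pruning in the UGS entry above, here is a hypothetical magnitude-style helper applied to both an edge mask and a weight mask. UGS itself learns differentiable masks and rewinds surviving weights between rounds, so treat this only as a sketch of the pruning step; all names and the toy tensors are assumptions.

```python
import torch


def prune_lowest(scores, mask, fraction):
    """Zero out the lowest-|score| fraction of the currently active entries.

    The same helper would be applied to edge-mask scores (to sparsify the
    graph) and to weight-mask scores (to sparsify the model).
    """
    active = mask.bool()
    k = int(fraction * int(active.sum()))
    if k == 0:
        return mask
    thresh = scores[active].abs().kthvalue(k).values  # k-th smallest magnitude
    new_mask = mask.clone()
    new_mask[scores.abs() <= thresh] = 0.0            # drop entries at/below it
    return new_mask


# One UGS-style round on toy tensors: prune 5% of edges and 20% of weights;
# the real method then rewinds surviving weights to initialization and retrains.
edge_scores, weight_scores = torch.randn(1000), torch.randn(5000)
edge_mask = prune_lowest(edge_scores, torch.ones(1000), 0.05)
weight_mask = prune_lowest(weight_scores, torch.ones(5000), 0.20)
```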
- Boost then Convolve: Gradient Boosting Meets Graph Neural Networks [6.888700669980625]
We show that gradient boosted decision trees (GBDT) often outperform other machine learning methods when faced with heterogeneous data.
We propose a novel architecture that trains GBDT and GNN jointly to get the best of both worlds.
Our model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of the GNN.
arXiv Detail & Related papers (2021-01-21T10:46:41Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.