Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation
- URL: http://arxiv.org/abs/2103.13944v1
- Date: Thu, 25 Mar 2021 16:04:08 GMT
- Title: Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation
- Authors: Yi Sun, Abel Valente, Sijia Liu, Dakuo Wang
- Abstract summary: We develop a multi-purpose interpretation framework by acquiring a mask that indicates topology perturbations of the input graphs.
We pack the framework into an interactive visualization system (GNNViz) which can fulfill multiple purposes: Preserve, Promote, or Attack GNN's predictions.
- Score: 24.665468294430216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior works on formalizing explanations of a graph neural network (GNN) focus
on a single use case: preserving the prediction results by identifying
important edges and nodes. In this paper, we develop a multi-purpose
interpretation framework by acquiring a mask that indicates topology
perturbations of the input graphs. We pack the framework into an interactive
visualization system (GNNViz) which can fulfill multiple purposes:
Preserve, Promote, or Attack GNN's predictions. We illustrate our approach's
novelty and effectiveness with three case studies: First, GNNViz can assist
non-expert users in easily exploring the relationship between graph topology and
the GNN's decision (Preserve), or in manipulating the prediction (Promote or Attack)
for an image classification task on MS-COCO; Second, on the Pokec social
network dataset, our framework can uncover unfairness and demographic biases;
Lastly, it compares with state-of-the-art GNN explainer baseline on a synthetic
dataset.
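The abstract's core mechanism, learning a mask over graph edges and optimizing it toward one of three objectives, can be sketched as follows. This is a minimal toy illustration under assumed definitions (a one-layer linear "GNN" with mean pooling, and hypothetical loss formulations for the three purposes), not the paper's actual implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gnn_logits(A, X, W):
    # toy one-layer "GNN": neighbor aggregation, linear transform,
    # then mean pooling over nodes (a stand-in for a real model)
    return (A @ X @ W).mean(axis=0)

def perturb(A, M):
    # apply a sigmoid edge mask to the adjacency matrix:
    # large positive entries of M keep an edge, large negative remove it
    return A * (1.0 / (1.0 + np.exp(-M)))

# toy graph: 3 nodes, 2 features, 2 classes
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
X = np.eye(3, 2)
W = np.array([[1., -1.], [-1., 1.]])

M = np.zeros_like(A)                      # neutral mask: every edge weighted 0.5
p_orig = softmax(gnn_logits(A, X, W))
p_pert = softmax(gnn_logits(perturb(A, M), X, W))
target = int(p_orig.argmax())

# the three purposes, expressed as losses over the masked graph;
# in practice M would be optimized by gradient descent on one of these
preserve = -np.log(p_pert[target])        # Preserve: keep the original prediction
promote  = -np.log(p_pert[1 - target])    # Promote: push toward another class
attack   = np.log(p_pert[target])         # Attack: suppress the original class
```

Minimizing `preserve` while sparsifying `M` would highlight the edges the prediction depends on, while the other two losses steer the mask toward changing the prediction.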
Related papers
- Rethinking Propagation for Unsupervised Graph Domain Adaptation [17.443218657417454]
Unlabelled Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
arXiv Detail & Related papers (2024-02-08T13:24:57Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Visualizing Graph Neural Networks with CorGIE: Corresponding a Graph to Its Embedding [16.80197065484465]
We propose an approach for corresponding an input graph to its node embedding (aka latent space).
We develop an interactive multi-view interface called CorGIE to instantiate the abstraction.
We present how to use CorGIE in two usage scenarios, and conduct a case study with two GNN experts.
arXiv Detail & Related papers (2021-06-24T08:59:53Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Higher-Order Explanations of Graph Neural Networks via Relevant Walks [3.1510406584101776]
Graph Neural Networks (GNNs) are a popular approach for predicting graph structured data.
In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions.
We extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
arXiv Detail & Related papers (2020-06-05T17:59:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.