CLEAR: Generative Counterfactual Explanations on Graphs
- URL: http://arxiv.org/abs/2210.08443v1
- Date: Sun, 16 Oct 2022 04:35:32 GMT
- Title: CLEAR: Generative Counterfactual Explanations on Graphs
- Authors: Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li
- Abstract summary: We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual explanations promote explainability in machine learning models
by answering the question "how should an input instance be perturbed to obtain
a desired predicted label?". The comparison of this instance before and after
perturbation can enhance human interpretation. Most existing studies on
counterfactual explanations are limited in tabular data or image data. In this
work, we study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many
challenges of this problem are still not well-addressed: 1) optimizing in the
discrete and disorganized space of graphs; 2) generalizing on unseen graphs;
and 3) maintaining the causality in the generated counterfactuals without prior
knowledge of the causal model. To tackle these challenges, we propose a novel
framework CLEAR which aims to generate counterfactual explanations on graphs
for graph-level prediction models. Specifically, CLEAR leverages a graph
variational autoencoder based mechanism to facilitate its optimization and
generalization, and promotes causality by leveraging an auxiliary variable to
better identify the underlying causal model. Extensive experiments on both
synthetic and real-world graphs validate the superiority of CLEAR over the
state-of-the-art methods in different aspects.
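The mechanism described in the abstract maps naturally onto a small label-conditioned graph VAE. The PyTorch snippet below is a minimal sketch of that idea, not the authors' released code: all module names, dimensions, and loss weights are illustrative assumptions, and CLEAR's auxiliary variable for identifying the underlying causal model is omitted. The encoder maps a graph (dense adjacency A, node features X) together with the desired label y* to a latent code; the decoder emits edge probabilities for the counterfactual graph; the loss keeps the counterfactual close to the input while pushing a frozen graph classifier toward y*.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphCFVAE(nn.Module):
    """Sketch of a label-conditioned graph VAE for counterfactual generation."""

    def __init__(self, n_nodes, n_feats, n_labels, d_latent=16, d_hidden=64):
        super().__init__()
        d_in = n_nodes * n_nodes + n_nodes * n_feats + n_labels
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mu = nn.Linear(d_hidden, d_latent)
        self.logvar = nn.Linear(d_hidden, d_latent)
        # The decoder is conditioned on the desired label y*, so a sampled z
        # decodes into a graph intended to be classified as y*.
        self.dec = nn.Sequential(
            nn.Linear(d_latent + n_labels, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_nodes * n_nodes),
        )
        self.n = n_nodes

    def forward(self, A, X, y_star):
        h = self.enc(torch.cat([A.flatten(1), X.flatten(1), y_star], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        logits = self.dec(torch.cat([z, y_star], dim=1))
        A_cf = torch.sigmoid(logits).view(-1, self.n, self.n)  # edge probabilities
        return A_cf, mu, logvar

def cf_loss(A_cf, A, mu, logvar, clf_logits, y_star, beta=1.0, lam=1.0):
    recon = F.binary_cross_entropy(A_cf, A)  # stay close to the input graph
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Push a *frozen* graph classifier toward the desired label y*.
    pred = F.cross_entropy(clf_logits, y_star.argmax(dim=1))
    return recon + beta * kl + lam * pred
```

Because the generator is amortized over graphs rather than optimized per instance, the same trained model can produce counterfactuals for unseen graphs, which is the generalization property the abstract emphasizes.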
Related papers
- Motif-Consistent Counterfactuals with Adversarial Refinement for Graph-Level Anomaly Detection [30.618065157205507]
We propose a novel approach, Motif-consistent Counterfactuals with Adversarial Refinement (MotifCAR), for graph-level anomaly detection.
The model combines the motif of one graph, i.e., the core subgraph carrying the identification (category) information, with the contextual subgraph of another graph to produce a raw counterfactual graph.
MotifCAR can generate high-quality counterfactual graphs.
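As a rough illustration of the combination step (the node selections and the single bridging edge below are assumptions, and the adversarial refinement stage that polishes the raw graph is omitted), one could graft the motif of one graph onto the context of another with networkx:

```python
import networkx as nx

def raw_counterfactual(g1, motif_nodes, g2, context_nodes):
    """Combine the motif of g1 with the context of g2 into a raw counterfactual."""
    motif = g1.subgraph(motif_nodes).copy()      # category-carrying core of g1
    context = g2.subgraph(context_nodes).copy()  # contextual subgraph of g2
    raw = nx.union(motif, context, rename=("m-", "c-"))
    # Bridge the two parts so the raw graph is connected; how to wire motif
    # and context together is a modeling choice, arbitrary in this sketch.
    raw.add_edge("m-" + str(next(iter(motif_nodes))),
                 "c-" + str(next(iter(context_nodes))))
    return raw
```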
arXiv Detail & Related papers (2024-07-18T08:04:57Z)
- Self-Explainable Temporal Graph Networks based on Graph Information Bottleneck [21.591458816091126]
Temporal Graph Networks (TGNNs) can capture both the graph topology and the dynamic dependencies of interactions within a graph over time.
There is a growing need to explain TGNN predictions, since it is difficult to identify how past events influence them.
This is the first work that simultaneously performs prediction and explanation for temporal graphs in an end-to-end manner.
arXiv Detail & Related papers (2024-06-19T04:55:34Z)
- Towards Self-Interpretable Graph-Level Anomaly Detection [73.1152604947837]
Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit notable dissimilarity compared to the majority in a collection.
We propose a Self-Interpretable Graph aNomaly dETection model (SIGNET) that detects anomalous graphs and simultaneously generates informative explanations.
arXiv Detail & Related papers (2023-10-25T10:10:07Z)
- Robust Ante-hoc Graph Explainer using Bilevel Optimization [0.7999703756441758]
We propose RAGE, a novel and flexible ante-hoc explainer for graph neural networks.
RAGE can effectively identify molecular substructures that contain the full information needed for prediction.
Our experiments on various molecular classification tasks show that RAGE explanations are better than existing post-hoc and ante-hoc approaches.
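For readers unfamiliar with the term in the title, the generic bilevel template behind an ante-hoc explainer of this kind (notation illustrative, not taken from the paper) is:

```latex
% Outer problem: choose explainer parameters \phi (e.g., a subgraph selector);
% inner problem: fit predictor parameters \theta on the selected substructure.
\min_{\phi}\; \mathcal{L}_{\mathrm{outer}}\big(\theta^{*}(\phi),\, \phi\big)
\quad \text{s.t.} \quad
\theta^{*}(\phi) \in \arg\min_{\theta}\; \mathcal{L}_{\mathrm{inner}}(\theta, \phi)
```

Training explainer and predictor jointly in this way is what makes the explanation ante-hoc: the predictor only ever sees the substructures the explainer selects.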
arXiv Detail & Related papers (2023-05-25T05:50:38Z)
- Beyond spectral gap: The role of the topology in decentralized learning [58.48291921602417]
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model.
This paper aims to paint an accurate picture of sparsely connected distributed optimization when workers share the same data distribution.
Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies.
arXiv Detail & Related papers (2022-06-07T08:19:06Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, revealing that the robustness of representations benefits the fidelity of explanatory subgraphs.
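For reference, the generic information-bottleneck form that a principle like USIB instantiates (notation here is illustrative, not copied from the paper) selects the explanatory subgraph that is maximally informative about the learned representation while compressing away the rest of the graph:

```latex
% Y: unsupervised graph-level representation, G: input graph,
% G_sub: candidate explanatory subgraph, \beta: compression trade-off.
\max_{G_{\mathrm{sub}}}\; I\big(Y;\, G_{\mathrm{sub}}\big) \;-\; \beta\, I\big(G;\, G_{\mathrm{sub}}\big)
```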
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
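A minimal sketch of such edit-based perturbations (the (head, relation, tail) edge encoding and the relation vocabulary are assumptions for illustration), which can supply corrupted negatives for a contrastive objective:

```python
import random

def perturb(edges, nodes, n_edits=1, rng=random):
    """Corrupt an explanation graph with random node/edge edit operations."""
    edges = list(edges)  # each edge is a (head, relation, tail) triple
    for _ in range(n_edits):
        op = rng.choice(["delete_edge", "swap_relation", "replace_node"])
        i = rng.randrange(len(edges))
        head, rel, tail = edges[i]
        if op == "delete_edge":
            if len(edges) > 1:  # keep the graph non-empty
                edges.pop(i)
        elif op == "swap_relation":
            edges[i] = (head, rng.choice(["causes", "desires", "capable of"]), tail)
        else:  # replace_node: swap the head for another concept
            edges[i] = (rng.choice(nodes), rel, tail)
    return edges
```

Pairing each gold explanation graph with such corrupted copies yields the positive/negative pairs a contrastive loss needs.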
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks [42.539085765796976]
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
arXiv Detail & Related papers (2022-03-29T03:08:33Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
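The examples in such a task can be pictured with a small, hypothetical data shape (field names and sample content below are assumptions, not the released dataset schema):

```python
from dataclasses import dataclass

@dataclass
class StanceExample:
    belief: str    # e.g. "Zoos should be banned."
    argument: str  # e.g. "Zoos confine animals."
    stance: str    # "support" or "counter"
    graph: list    # (concept, relation, concept) triples

ex = StanceExample(
    belief="Zoos should be banned.",
    argument="Zoos confine animals.",
    stance="support",
    graph=[("zoos", "capable of", "confining animals"),
           ("confining animals", "causes", "animal suffering")],
)
```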
arXiv Detail & Related papers (2021-04-15T17:51:36Z)