COMRECGC: Global Graph Counterfactual Explainer through Common Recourse
- URL: http://arxiv.org/abs/2505.07081v2
- Date: Tue, 13 May 2025 02:51:33 GMT
- Title: COMRECGC: Global Graph Counterfactual Explainer through Common Recourse
- Authors: Gregoire Fournier, Sourav Medya
- Abstract summary: Graph neural networks (GNNs) have been widely used in various domains such as social networks, molecular biology, or recommendation systems. Explanations of the GNNs' predictions can be categorized into two types: factual and counterfactual. We formalize the common recourse explanation problem and design an effective algorithm, COMRECGC, to solve it.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have been widely used in various domains such as social networks, molecular biology, and recommendation systems. Concurrently, various explanation methods for GNNs have arisen to complement their black-box nature. Explanations of GNN predictions can be categorized into two types--factual and counterfactual. Given a GNN trained for binary classification into ''accept'' and ''reject'' classes, a global counterfactual explanation consists of generating a small set of ''accept'' graphs relevant to all of the input ''reject'' graphs. The transformation of a ''reject'' graph into an ''accept'' graph is called a recourse. A common recourse explanation is a small set of recourses from which every ''reject'' graph can be turned into an ''accept'' graph. Although local counterfactual explanations have been studied extensively, the problem of finding common recourses for global counterfactual explanation remains unexplored, particularly for GNNs. In this paper, we formalize the common recourse explanation problem and design an effective algorithm, COMRECGC, to solve it. We benchmark our algorithm against strong baselines on four real-world graph datasets and demonstrate the superior performance of COMRECGC over the competitors. We also compare common recourse explanations to graph counterfactual explanations, showing that common recourse explanations are comparable or superior, making them worth considering for applications such as drug discovery or computational biology.
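The abstract frames common recourse as a covering problem: each candidate recourse "covers" the reject graphs it can turn into accept graphs, and the goal is a small set of recourses covering every reject graph. The sketch below is NOT the paper's COMRECGC algorithm (which is not detailed here); it is a hypothetical greedy set-cover illustration of that covering structure, with all names and data invented for the example.

```python
# Toy sketch of the covering structure behind common recourse explanations:
# each candidate recourse (an edit pattern on graphs) covers the set of
# reject graphs it flips to "accept"; greedily choose a small covering set.
# Hypothetical illustration only, not the paper's COMRECGC algorithm.

def greedy_common_recourse(reject_graphs, candidates):
    """candidates: dict mapping recourse id -> set of reject-graph ids it fixes.
    Returns (chosen recourse ids, reject graphs left uncovered)."""
    uncovered = set(reject_graphs)
    chosen = []
    while uncovered:
        # pick the recourse that fixes the most still-uncovered reject graphs
        best = max(candidates, key=lambda r: len(candidates[r] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:  # no candidate can fix any remaining graph
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# hypothetical example: 4 reject graphs (ids 0-3), 3 candidate recourses
cands = {"r1": {0, 1}, "r2": {1, 2, 3}, "r3": {0}}
chosen, left = greedy_common_recourse([0, 1, 2, 3], cands)
```

The standard greedy heuristic for set cover gives a logarithmic approximation guarantee, which is one reason covering formulations like this are tractable in practice even though exact minimization is NP-hard.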
Related papers
- Global Graph Counterfactual Explanation: A Subgraph Mapping Approach [54.42907350881448]
Graph Neural Networks (GNNs) have been widely deployed in various real-world applications.
Counterfactual explanation aims to find minimum perturbations on input graphs that change the GNN predictions.
We propose GlobalGCE, a novel global-level graph counterfactual explanation method.
arXiv Detail & Related papers (2024-10-25T21:39:05Z) - Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z) - View-based Explanations for Graph Neural Networks [27.19300566616961]
We propose GVEX, a novel paradigm that generates Graph Views for EXplanation.
We show that this strategy provides an approximation ratio of 1/2.
Our second algorithm performs a single-pass to an input node stream in batches to incrementally maintain explanation views.
arXiv Detail & Related papers (2024-01-04T06:20:24Z) - Global Counterfactual Explainer for Graph Neural Networks [8.243944711755617]
Graph neural networks (GNNs) find applications in various domains such as computational biology, natural language processing, and computer security.
There is an increasing need to explain GNN predictions since GNNs are black-box machine learning models.
Existing methods for counterfactual explanation of GNNs are limited to instance-specific local reasoning.
arXiv Detail & Related papers (2022-10-21T02:46:35Z) - CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z) - Global Explainability of GNNs via Logic Combination of Learned Concepts [11.724402780594257]
We propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary combinations of learned graphical concepts.
GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations.
Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model.
arXiv Detail & Related papers (2022-10-13T16:30:03Z) - Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z) - Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.