A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges
- URL: http://arxiv.org/abs/2210.12089v3
- Date: Tue, 11 Jun 2024 11:18:57 GMT
- Title: A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges
- Authors: Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, Fosca Giannotti
- Abstract summary: Graph Neural Networks (GNNs) perform well in community detection and molecule classification.
Counterfactual Explanations (CE) provide counter-examples to overcome the transparency limitations of black-box models.
- Score: 9.206590881401528
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) perform well in community detection and molecule classification. Counterfactual Explanations (CE) provide counter-examples to overcome the transparency limitations of black-box models. Given the growing attention on graph learning, we focus on the concepts of CE for GNNs. We analysed the state of the art (SoA) to provide a taxonomy, a uniform notation, and the benchmarking datasets and evaluation metrics. We discuss fourteen methods, their evaluation protocols, twenty-two datasets, and nineteen metrics. We integrated the majority of the methods into the GRETEL library to conduct an empirical evaluation and understand their strengths and pitfalls. We highlight open challenges and future work.
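To make the idea concrete, the sketch below searches for a graph counterfactual of a toy black-box classifier: the smallest single-edge edit that flips its prediction. The triangle-detecting black_box and the exhaustive single-toggle search are illustrative assumptions, not one of the surveyed methods and not the GRETEL API.

```python
# Illustrative sketch only: a toy black-box graph classifier and a brute-force
# search for a one-edge counterfactual (real explainers and GRETEL differ).
from itertools import combinations

def black_box(edges, n_nodes):
    """Toy stand-in for a trained GNN: predict 1 iff the graph contains a triangle."""
    es = {frozenset(e) for e in edges}
    return int(any(all(frozenset(p) in es for p in combinations(t, 2))
                   for t in combinations(range(n_nodes), 3)))

def one_edge_counterfactual(edges, n_nodes, model):
    """Return a graph differing by one edge whose prediction differs from the input's."""
    original = model(edges, n_nodes)
    current = {frozenset(e) for e in edges}
    for pair in combinations(range(n_nodes), 2):
        candidate = current ^ {frozenset(pair)}           # toggle one edge (add or remove)
        cand_edges = [tuple(sorted(p)) for p in candidate]
        if model(cand_edges, n_nodes) != original:
            return cand_edges, pair                       # counterfactual and the edit
    return None, None                                     # no single-edge counterfactual

if __name__ == "__main__":
    g = [(0, 1), (1, 2), (0, 2), (2, 3)]                  # contains a triangle -> class 1
    cf, edit = one_edge_counterfactual(g, 4, black_box)
    print("prediction flips if we toggle edge", edit, "->", cf)
```

Real graph counterfactual explainers replace this brute-force toggle with learned or heuristic search and add requirements such as minimality and plausibility, which is the design space the survey's taxonomy and metrics organise.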
Related papers
- G-OSR: A Comprehensive Benchmark for Graph Open-Set Recognition [54.45837774534411]
We introduce G-OSR, a benchmark for evaluating Graph Open-Set Recognition (GOSR) methods at both the node and graph levels.
Results offer critical insights into the generalizability and limitations of current GOSR methods.
arXiv Detail & Related papers (2025-03-01T13:02:47Z)
- A Survey on Class-Agnostic Counting: Advancements from Reference-Based to Open-World Text-Guided Approaches [6.356364436395916]
We present the first comprehensive review of class-agnostic counting (CAC) methodologies.
We propose a taxonomy to categorize CAC approaches into three paradigms: reference-based, reference-less, and open-world text-guided.
We present results on the FSC-147 dataset, setting a leaderboard using gold-standard metrics, and on the CARPK dataset to assess generalization capabilities.
arXiv Detail & Related papers (2025-01-31T14:47:09Z)
- Deep Graph Anomaly Detection: A Survey and New Perspectives [86.84201183954016]
Graph anomaly detection (GAD) aims to identify unusual graph instances (nodes, edges, subgraphs, or graphs).
Deep learning approaches, graph neural networks (GNNs) in particular, have been emerging as a promising paradigm for GAD.
arXiv Detail & Related papers (2024-09-16T03:05:11Z)
- KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion [18.497296711526268]
We present KGExplainer, a model-agnostic method that identifies connected subgraphs and distills an evaluator to assess them quantitatively.
Experiments on benchmark datasets demonstrate that KGExplainer achieves promising improvements and an optimal ratio of 83.3% in human evaluation.
arXiv Detail & Related papers (2024-04-05T05:02:12Z)
- Structure Your Data: Towards Semantic Graph Counterfactuals [1.8817715864806608]
Concept-based counterfactual explanations (CEs) consider alternative scenarios to understand which high-level semantic features contributed to model predictions.
In this work, we propose CEs based on the semantic graphs accompanying input data to achieve more descriptive, accurate, and human-aligned explanations.
arXiv Detail & Related papers (2024-03-11T08:40:37Z)
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- On Discprecncies between Perturbation Evaluations of Graph Neural Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions; a minimal sketch of this retrain-and-compare protocol appears after the related-papers list.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z)
- GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations [21.997015999698732]
Diverse explainability methods of graph neural networks (GNN) have been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.
It is not yet clear how to evaluate the correctness of those explanations, whether from a human or a model perspective.
We propose GInX-Eval, an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness.
arXiv Detail & Related papers (2023-09-28T07:56:10Z)
- From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited [51.24526202984846]
Graph-based semi-supervised learning (GSSL) has long been a hot research topic.
Graph convolutional networks (GCNs) have become the predominant technique thanks to their promising performance.
arXiv Detail & Related papers (2023-09-24T10:10:21Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks [0.3441021278275805]
GCExplainer is an unsupervised approach for post-hoc discovery and extraction of global concept-based explanations for graph neural networks (GNNs).
We demonstrate the success of our technique on five node classification datasets and two graph classification datasets, showing that we are able to discover and extract high-quality concept representations by putting the human in the loop.
arXiv Detail & Related papers (2021-07-25T20:52:48Z)
- Quantifying Challenges in the Application of Graph Representation Learning [0.0]
We provide an application-oriented perspective on a set of popular embedding approaches.
We evaluate their representational power with respect to real-world graph properties.
Our results suggest that "one-to-fit-all" GRL approaches are hard to define in real-world scenarios.
arXiv Detail & Related papers (2020-06-18T03:19:43Z)
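For the retraining-based evaluation of GNN attributions listed above, the sketch below illustrates the retrain-and-compare protocol under toy assumptions: the graph generator, the logistic-regression surrogate trained on two graph-level features, and the triangle-based attribution proxy are all hypothetical stand-ins, not the paper's implementation.

```python
# Hypothetical sketch of a retrain-and-compare evaluation: drop the edges an
# attribution marks as important, retrain a simple surrogate classifier on the
# perturbed graphs, and compare the accuracy drop against dropping random edges.
import math
import random
from itertools import combinations

def triangle_edges(edges):
    """Return the edges lying on at least one triangle (toy attribution proxy)."""
    es = {frozenset(e) for e in edges}
    important = set()
    for tri in combinations(sorted({v for e in edges for v in e}), 3):
        sides = [frozenset(p) for p in combinations(tri, 2)]
        if all(s in es for s in sides):
            important.update(tuple(sorted(s)) for s in sides)
    return important

def features(edges):
    """Two graph-level features: #edges on triangles and (scaled) #edges overall."""
    return [len(triangle_edges(edges)), len(edges) / 10.0]

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain logistic regression fitted by SGD: this is the 'retraining' step."""
    w = [0.0] * (len(X[0]) + 1)                                  # weights + bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
            grad = 1.0 / (1.0 + math.exp(-z)) - yi
            for j, xj in enumerate(xi):
                w[j] -= lr * grad * xj
            w[-1] -= lr * grad
    return lambda x: int(w[-1] + sum(wj * xj for wj, xj in zip(w, x)) > 0)

def make_graph(label, rng):
    """Class 1: triangle on nodes 0-2; class 0: a path; both get triangle-free noise."""
    motif = [(0, 1), (1, 2), (0, 2)] if label else [(0, 1), (1, 2), (2, 3)]
    noise = [(4, leaf) for leaf in rng.sample(range(5, 13), 6)]  # star adds no triangles
    return motif + noise

def drop_one(edges, important_first, rng):
    """Remove one edge: an attributed (triangle) edge if requested, else a random one."""
    attributed = sorted(triangle_edges(edges))
    victim = attributed[0] if important_first and attributed else rng.choice(edges)
    return [e for e in edges if e != victim]

if __name__ == "__main__":
    rng = random.Random(0)
    data = [(make_graph(y, rng), y) for y in [0, 1] * 40]
    train_set, test_set = data[:60], data[60:]
    for important_first in (True, False):
        tr = [(drop_one(g, important_first, rng), y) for g, y in train_set]
        te = [(drop_one(g, important_first, rng), y) for g, y in test_set]
        model = train_logreg([features(g) for g, _ in tr], [y for _, y in tr])
        acc = sum(model(features(g)) == y for g, y in te) / len(te)
        tag = "drop attributed edges" if important_first else "drop random edges"
        print(tag, "-> retrained test accuracy:", round(acc, 2))
```

In this toy setting, dropping the attributed edges collapses the retrained accuracy to chance, while dropping random edges barely hurts, which is the kind of contrast such retraining-based protocols use to judge attribution quality.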