Counterfactual Explanations for Graph Classification Through the Lenses
of Density
- URL: http://arxiv.org/abs/2307.14849v1
- Date: Thu, 27 Jul 2023 13:28:18 GMT
- Title: Counterfactual Explanations for Graph Classification Through the Lenses
of Density
- Authors: Carlo Abrate, Giulia Preti, Francesco Bonchi
- Abstract summary: We define a general density-based counterfactual search framework to generate instance-level counterfactual explanations for graph classifiers.
We show two specific instantiations of this general framework: a method that searches for counterfactual graphs by opening or closing triangles, and a method driven by maximal cliques.
We evaluate the effectiveness of our approaches on 7 brain network datasets and compare the generated counterfactual explanations according to several widely-used metrics.
- Score: 19.53018353016675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual examples have emerged as an effective approach to produce
simple and understandable post-hoc explanations. In the context of graph
classification, previous work has focused on generating counterfactual
explanations by manipulating the most elementary units of a graph, i.e.,
removing an existing edge, or adding a non-existing one. In this paper, we
claim that such language of explanation might be too fine-grained, and turn our
attention to some of the main characterizing features of real-world complex
networks, such as the tendency to close triangles, the existence of recurring
motifs, and the organization into dense modules. We thus define a general
density-based counterfactual search framework to generate instance-level
counterfactual explanations for graph classifiers, which can be instantiated
with different notions of dense substructures. In particular, we show two
specific instantiations of this general framework: a method that searches for
counterfactual graphs by opening or closing triangles, and a method driven by
maximal cliques. We also discuss how the general method can be instantiated to
exploit any other notion of dense substructures, including, for instance, a
given taxonomy of nodes. We evaluate the effectiveness of our approaches on 7
brain network datasets and compare the generated counterfactual explanations
according to several widely-used metrics. Results confirm that adopting a
semantically relevant unit of change, such as density, is essential to defining
versatile and interpretable counterfactual explanation methods.
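To make the triangle-based instantiation concrete, the following is a minimal sketch of a greedy density-based counterfactual search. It is an illustration of the idea, not the authors' actual algorithm: `toy_classifier` is a hypothetical stand-in for a trained graph classifier (it labels a graph by a density threshold), and the search applies one triangle-level edit at a time, closing an open wedge or opening an existing triangle, until the predicted label flips.

```python
from itertools import combinations

def density(adj):
    # Graph density for an undirected graph given as {node: set(neighbors)}.
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    return 2 * m / (n * (n - 1)) if n > 1 else 0.0

def toy_classifier(adj):
    # Hypothetical stand-in for a trained graph classifier:
    # label 1 if the graph is dense, 0 otherwise.
    return 1 if density(adj) >= 0.5 else 0

def triangle_edits(adj):
    # Candidate triangle-level edits:
    #  - a "closing" edit adds the missing edge of an open wedge u-v-w,
    #  - an "opening" edit removes an edge that lies on an existing triangle.
    closing, opening = set(), set()
    for v, nbrs in adj.items():
        for u, w in combinations(sorted(nbrs), 2):
            if w in adj[u]:
                opening |= {frozenset((u, v)), frozenset((v, w)), frozenset((u, w))}
            else:
                closing.add(frozenset((u, w)))
    return closing, opening

def flip_edge(adj, edge):
    # Return a copy of adj with the given edge toggled (added or removed).
    new = {v: set(nbrs) for v, nbrs in adj.items()}
    u, w = tuple(edge)
    if w in new[u]:
        new[u].discard(w)
        new[w].discard(u)
    else:
        new[u].add(w)
        new[w].add(u)
    return new

def triangle_counterfactual(adj, classify, max_steps=50):
    # Greedy search: apply one triangle-level edit per step until the
    # classifier's prediction flips; returns the counterfactual graph or None.
    target = 1 - classify(adj)
    current = adj
    for _ in range(max_steps):
        closing, opening = triangle_edits(current)
        for edge in list(closing) + list(opening):
            candidate = flip_edge(current, edge)
            if classify(candidate) == target:
                return candidate
        if not closing:  # dead end: no open wedge left to close
            return None
        current = flip_edge(current, next(iter(closing)))
    return None
```

On a 5-node path graph (density 0.4, class 0), a single wedge-closing edit already raises the density to 0.5 and flips the toy classifier's label, yielding a counterfactual one edge away from the input. A real instantiation would plug in the trained classifier and score candidates by graph edit distance instead of taking the first flip.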
Related papers
- Structure Your Data: Towards Semantic Graph Counterfactuals [1.8817715864806608]
Counterfactual explanations (CEs) based on concepts are explanations that consider alternative scenarios to understand which high-level semantic features contributed to model predictions.
In this work, we propose CEs based on the semantic graphs accompanying input data to achieve more descriptive, accurate, and human-aligned explanations.
arXiv Detail & Related papers (2024-03-11T08:40:37Z)
- EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph Completion [54.12709176438264]
Commonsense knowledge graphs (CSKGs) utilize free-form text to represent named entities, short phrases, and events as their nodes.
Current methods leverage semantic similarities to increase the graph density, but the semantic plausibility of the nodes and their relations is under-explored.
We propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class.
arXiv Detail & Related papers (2024-02-15T02:27:23Z)
- Homomorphism Counts for Graph Neural Networks: All About That Basis [8.25219440625445]
We argue for a more fine-grained approach, which incorporates the homomorphism counts of all structures in the "basis" of the target pattern.
This yields strictly more expressive architectures without incurring any additional overhead in terms of computational complexity.
arXiv Detail & Related papers (2024-02-13T16:57:06Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- Topological Representations of Local Explanations [8.559625821116454]
We propose a topology-based framework to extract a simplified representation from a set of local explanations.
We demonstrate that our framework can not only reliably identify differences between explainability techniques but also provides stable representations.
arXiv Detail & Related papers (2022-01-06T17:46:45Z)
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose a Semantic Graph Convolutional Networks (SGCN) that explores the implicit semantics by learning latent semantic-paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
- MAIRE -- A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers [5.02231401459109]
The paper introduces a novel framework for extracting model-agnostic human interpretable rules to explain a classifier's output.
The framework is model-agnostic, can be applied to any arbitrary classifier, and supports all types of attributes (including continuous, ordered, and unordered discrete attributes).
arXiv Detail & Related papers (2020-11-03T06:53:06Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
- Equivariant Maps for Hierarchical Structures [17.931059591895984]
We show that symmetry of a hierarchical structure is the "wreath product" of symmetries of the building blocks.
By voxelizing the point cloud, we impose a hierarchy of translation and permutation symmetries on the data.
We report state-of-the-art on Semantic3D, S3DIS, and vKITTI, that include some of the largest real-world point-cloud benchmarks.
arXiv Detail & Related papers (2020-06-05T18:42:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.