Revealing Combinatorial Reasoning of GNNs via Graph Concept Bottleneck Layer
- URL: http://arxiv.org/abs/2603.02025v1
- Date: Mon, 02 Mar 2026 16:07:24 GMT
- Title: Revealing Combinatorial Reasoning of GNNs via Graph Concept Bottleneck Layer
- Authors: Yue Niu, Zhaokai Sun, Jiayi Yang, Xiaofeng Cao, Rui Fan, Xin Sun, Hanli Wang, Wei Ye
- Abstract summary: We develop a graph concept bottleneck layer that can be integrated into any GNN architecture. The predicted concept scores are projected to class labels by a sparse linear layer. This enforces the combinatorial reasoning of GNNs' predictions to fit a soft logical rule over graph concepts.
- Score: 28.886850252681754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite their success in various domains, the growing dependence on GNNs raises a critical concern about the nature of the combinatorial reasoning underlying their predictions, which is often hidden within their black-box architectures. Addressing this challenge requires understanding how GNNs translate topological patterns into logical rules. However, current works only uncover hard logical rules over graph concepts, which cannot quantify the contribution of each concept to a prediction. Moreover, they are post-hoc interpretable methods that generate explanations after model training and may not accurately reflect the true combinatorial reasoning of GNNs, since they approximate it with a surrogate. In this work, we develop a graph concept bottleneck layer that can be integrated into any GNN architecture to guide it to predict selected discriminative global graph concepts. The predicted concept scores are further projected to class labels by a sparse linear layer. This enforces the combinatorial reasoning of GNNs' predictions to fit a soft logical rule over graph concepts and thus quantifies the contribution of each concept. To further improve the quality of the concept bottleneck, we treat concepts as "graph words" and graphs as "graph sentences", and leverage language models to learn graph concept embeddings. Extensive experiments on multiple datasets show that our GCBMs achieve state-of-the-art performance in both classification and interpretability.
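To make the pipeline concrete, here is a minimal PyTorch sketch of the concept-bottleneck pattern the abstract describes: a GNN encoder pools node embeddings into a graph embedding, a bottleneck head scores a fixed set of global concepts, and an L1-regularized linear layer maps concept scores to class logits. All names (ConceptBottleneckGNN, num_concepts) and the dense-adjacency GCN are illustrative assumptions; the paper's concept selection and language-model concept embeddings are not reproduced here.

```python
# Minimal sketch (not the paper's code): a GNN with a concept bottleneck.
# Dense-adjacency GCN encoder -> K concept scores -> sparse linear head.
import torch
import torch.nn as nn

class ConceptBottleneckGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, num_concepts, num_classes):
        super().__init__()
        self.gcn1 = nn.Linear(in_dim, hid_dim)
        self.gcn2 = nn.Linear(hid_dim, hid_dim)
        # Bottleneck: every prediction must pass through concept scores.
        self.concept_head = nn.Linear(hid_dim, num_concepts)
        # Sparse linear layer: the L1 penalty in the loss below makes each
        # class depend on few concepts, giving a soft logical rule.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, x, adj):
        # x: [N, in_dim] node features; adj: [N, N] normalized adjacency.
        h = torch.relu(adj @ self.gcn1(x))
        h = torch.relu(adj @ self.gcn2(h))
        g = h.mean(dim=0)                               # mean-pool to graph embedding
        concepts = torch.sigmoid(self.concept_head(g))  # concept scores in [0, 1]
        return self.classifier(concepts), concepts

model = ConceptBottleneckGNN(in_dim=8, hid_dim=32, num_concepts=6, num_classes=2)
x, adj = torch.randn(5, 8), torch.eye(5)
logits, concepts = model(x, adj)
# Training loss: cross-entropy + L1 sparsity on the concept->class weights.
label = torch.tensor([1])
loss = nn.functional.cross_entropy(logits.unsqueeze(0), label) \
     + 1e-3 * model.classifier.weight.abs().sum()
# Per-concept contribution to class c: model.classifier.weight[c] * concepts.
```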
Related papers
- Extracting Interpretable Logic Rules from Graph Neural Networks [7.262955921646328]
Graph neural networks (GNNs) operate over both input feature spaces and graph structures.
We propose a novel framework, LOGICXGNN, for extracting interpretable logic rules from GNNs.
LOGICXGNN is model-agnostic, efficient, and data-driven, eliminating the need for predefined concepts.
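The summary does not spell out the extraction algorithm; as a generic illustration of post-hoc logic-rule extraction (not LOGICXGNN itself), the sketch below fits a shallow decision tree to a model's own predictions over binarized hidden activations and reads each branch as an IF-THEN rule. The random arrays stand in for real GNN activations.

```python
# Generic post-hoc rule extraction (NOT LOGICXGNN): fit a shallow decision
# tree to mimic a black-box model on binarized hidden activations, then
# print its branches as IF-THEN rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
acts = rng.random((200, 16))             # stand-in for GNN hidden activations
preds = (acts[:, 0] > 0.5).astype(int)   # stand-in for the GNN's own labels

binarized = (acts > acts.mean(axis=0)).astype(int)  # neuron "fires" or not
surrogate = DecisionTreeClassifier(max_depth=3).fit(binarized, preds)
# Each root-to-leaf path is a conjunctive rule over fired neurons.
print(export_text(surrogate, feature_names=[f"neuron_{i}" for i in range(16)]))
```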
arXiv Detail & Related papers (2025-03-25T09:09:46Z)
- Global Graph Counterfactual Explanation: A Subgraph Mapping Approach [54.42907350881448]
Graph Neural Networks (GNNs) have been widely deployed in various real-world applications.
Counterfactual explanation aims to find minimal perturbations to input graphs that change GNN predictions.
We propose GlobalGCE, a novel global-level graph counterfactual explanation method.
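As a reference point for the counterfactual objective (a naive instance-level baseline, not GlobalGCE's subgraph-mapping method), the sketch below greedily deletes whichever edge most reduces the predicted class's probability until the prediction flips. It assumes a hypothetical model(x, adj) that returns a 1-D tensor of class logits.

```python
# Naive counterfactual baseline (illustrative only, not GlobalGCE):
# greedily remove edges until the model's predicted class changes.
import torch

@torch.no_grad()
def greedy_counterfactual(model, x, adj, max_edits=10):
    """Assumes model(x, adj) returns a 1-D tensor of class logits."""
    adj = adj.clone()
    orig = model(x, adj).argmax().item()
    for _ in range(max_edits):
        best, best_p = None, model(x, adj).softmax(-1)[orig]
        for i, j in adj.triu(1).nonzero().tolist():  # each undirected edge once
            trial = adj.clone()
            trial[i, j] = trial[j, i] = 0.0          # try deleting edge (i, j)
            p = model(x, trial).softmax(-1)[orig]
            if p < best_p:                           # most damaging deletion so far
                best, best_p = (i, j), p
        if best is None:                             # no deletion helps
            return None
        i, j = best
        adj[i, j] = adj[j, i] = 0.0
        if model(x, adj).argmax().item() != orig:    # prediction flipped
            return adj                               # counterfactual found
    return None
```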
arXiv Detail & Related papers (2024-10-25T21:39:05Z)
- Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
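The summary names the GIB objective without stating it; a textbook-style rendering (the retrieval-based causal component is omitted, and the model interface is assumed) trades prediction quality against how much of the input graph the explanation retains:

```python
# Textbook GIB-style objective (generic form, not the paper's full method):
# a soft subgraph selector should stay predictive of the label while
# retaining as little of the input graph as possible.
import torch
import torch.nn.functional as F

def gib_loss(model, x, adj, soft_mask, label, beta=0.1):
    # soft_mask in [0, 1]^{N x N}: a learned, differentiable subgraph selector.
    out = model(x, adj * soft_mask)          # predict from the subgraph only
    predictive = F.cross_entropy(out.unsqueeze(0), label)  # keep I(G_s; Y) high
    compression = (adj * soft_mask).mean()   # crude proxy for I(G_s; G)
    return predictive + beta * compression
```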
arXiv Detail & Related papers (2024-02-07T09:57:39Z)
- Everybody Needs a Little HELP: Explaining Graphs via Hierarchical Concepts [12.365451175795338]
Graph neural networks (GNNs) have led to breakthroughs in domains such as drug discovery, social network analysis, and travel time estimation.
However, they lack interpretability, which hinders human trust and thereby deployment in settings with high-stakes decisions.
We provide HELP, a novel, inherently interpretable graph pooling approach that reveals how concepts from different GNN layers compose to new ones in later steps.
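As a loose illustration of hierarchical concept pooling (not HELP's actual operator), the sketch below clusters node embeddings with k-means and merges each cluster into a single "concept node", coarsening the graph one level; repeating this per layer yields a concept hierarchy.

```python
# Loose illustration of concept-style pooling (not HELP's operator):
# cluster node embeddings, then merge each cluster into one coarse node.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
H = rng.random((10, 8))                       # node embeddings from a GNN layer
A = (rng.random((10, 10)) > 0.7).astype(float)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(H)
# Assignment matrix P[i, c] = 1 if node i belongs to cluster (concept) c.
P = np.eye(3)[labels]
H_coarse = P.T @ H / P.sum(axis=0, keepdims=True).T  # mean-pool per concept
A_coarse = P.T @ A @ P                               # inter-concept connectivity
# Repeating this per layer yields a hierarchy of increasingly global concepts.
```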
arXiv Detail & Related papers (2023-11-25T20:06:46Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
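The decomposition idea is easiest to see on a single linear GCN layer, where the update H' = AHW is additive over node groups, so the contribution of a node subset propagates exactly. This is a hand-rolled illustration under a linearity assumption, not DEGREE's algorithm, which also handles nonlinear components.

```python
# Decomposition idea behind contribution tracking (illustrative, not DEGREE):
# a linear GCN layer H' = A @ H @ W is additive in H, so its output splits
# exactly into a part caused by a node subset S and a remainder.
import torch

N, D = 6, 4
A = torch.rand(N, N)            # (normalized) adjacency, stand-in values
H = torch.rand(N, D)            # node features
W = torch.rand(D, D)            # layer weight

S = [0, 2]                      # subset of nodes whose contribution we track
mask = torch.zeros(N, 1)
mask[S] = 1.0

H_S, H_rest = H * mask, H * (1 - mask)
out_S, out_rest = A @ H_S @ W, A @ H_rest @ W
# Additivity check: contributions from S and its complement sum to the output.
assert torch.allclose(out_S + out_rest, A @ H @ W, atol=1e-5)
# Stacking such decomposed layers (with extra rules for nonlinearities) is
# what lets a method like DEGREE attribute the final prediction to S.
```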
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Encoding Concepts in Graph Neural Networks [6.129235861306906]
We introduce the Concept Module, the first differentiable concept-discovery approach for graph networks.
The proposed approach makes graph networks explainable by design by first discovering graph concepts and then using these to solve the task.
Our results demonstrate that this approach allows graph networks to attain accuracy comparable to their vanilla equivalents.
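A toy version of activation-based concept discovery (illustrative only, not the paper's module): Booleanize each neuron and treat nodes with identical firing patterns as instances of one discrete concept.

```python
# Toy concept discovery (illustrative, not the paper's concept module):
# nodes whose binarized activations match share one discrete "concept".
import torch

acts = torch.rand(12, 6)                 # node activations from a GNN layer
fired = (acts > 0.5).int()               # Booleanize each neuron
concepts, ids = torch.unique(fired, dim=0, return_inverse=True)
print(f"{len(concepts)} discovered concepts; node->concept map: {ids.tolist()}")
# Downstream, the task is solved from concept assignments instead of raw
# embeddings, which is what makes the pipeline explainable by design.
```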
arXiv Detail & Related papers (2022-07-27T15:34:14Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
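A toy stand-in for the RL formulation (not RC-Explainer's actual agent or reward): a REINFORCE-style policy with one logit per candidate edge samples small subgraphs and is rewarded by the model's probability for the target class. It assumes a hypothetical model(x, adj) that returns 1-D class logits, and at least k candidate edges.

```python
# Minimal REINFORCE-style edge selection (toy stand-in, not RC-Explainer):
# a policy picks k edges; reward is the model's target-class probability on
# the selected subgraph, pushing the policy toward influential edges.
import torch
import torch.nn as nn

def select_edges(model, x, adj, target, k=3, steps=100):
    edges = adj.triu(1).nonzero()                   # candidate undirected edges
    logits = nn.Parameter(torch.zeros(len(edges)))  # one logit per edge
    opt = torch.optim.Adam([logits], lr=0.1)
    for _ in range(steps):
        probs = torch.softmax(logits, -1)
        idx = torch.multinomial(probs, k)           # sample k edges w/o replacement
        sub = torch.zeros_like(adj)
        for i, j in edges[idx].tolist():
            sub[i, j] = sub[j, i] = 1.0             # build the selected subgraph
        reward = model(x, sub).softmax(-1)[target].detach()
        loss = -reward * torch.log(probs[idx]).sum()  # REINFORCE estimator
        opt.zero_grad(); loss.backward(); opt.step()
    return edges[torch.topk(logits.detach(), k).indices]
```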
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Algorithmic Concept-based Explainable Reasoning [0.3149883354098941]
Recent research on graph neural network (GNN) models successfully applied GNNs to classical graph algorithms and optimisation problems.
A key hindrance of these approaches is their lack of explainability, since GNNs are black-box models that cannot be interpreted directly.
We introduce concept-bottleneck GNNs, which rely on a modification to the GNN readout mechanism.
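The readout modification can be shown schematically (shapes only, not the paper's code): instead of mapping the pooled graph embedding straight to class logits, route it through concept logits first, so the concept-to-class weights stay inspectable.

```python
# Standard vs. concept-bottleneck readout (schematic, not the paper's code).
import torch
import torch.nn as nn

h = torch.randn(7, 16)               # node embeddings after message passing
standard_readout = nn.Linear(16, 3)  # pooled embedding -> class logits
concept_readout = nn.Linear(16, 5)   # pooled embedding -> 5 concept logits
to_classes = nn.Linear(5, 3)         # concepts -> classes, inspectable

g = h.mean(dim=0)
y_standard = standard_readout(g)                           # opaque path
y_concept = to_classes(torch.sigmoid(concept_readout(g)))  # via concepts
```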
arXiv Detail & Related papers (2021-07-15T17:44:51Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
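The parameterization that enables the inductive setting can be sketched as follows (shapes illustrative, not PGExplainer's exact architecture): one shared MLP scores every edge from its endpoint embeddings, so no per-instance optimization is needed at test time.

```python
# Amortized edge scoring in the spirit of a parameterized explainer
# (illustrative shapes only, not PGExplainer's exact architecture).
import torch
import torch.nn as nn

edge_mlp = nn.Sequential(nn.Linear(2 * 16, 32), nn.ReLU(), nn.Linear(32, 1))

z = torch.randn(10, 16)                          # node embeddings from the GNN
edges = torch.tensor([[0, 1], [2, 3], [4, 5]])   # candidate edges (i, j)
pair = torch.cat([z[edges[:, 0]], z[edges[:, 1]]], dim=-1)  # [E, 32]
scores = torch.sigmoid(edge_mlp(pair)).squeeze(-1)          # importance in (0, 1)
# Training (omitted): optimize edge_mlp so high-scoring edges alone preserve
# the GNN's prediction; at test time it scores new graphs in one forward pass.
```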
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
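A simplified rendering of the differentiable edge-masking recipe (the paper uses stochastic hard-concrete gates; this sketch uses a plain sigmoid relaxation and assumes a hypothetical model(x, adj) returning a logit vector): learn one logit per edge, keep the masked prediction close to the original, and penalize mask size.

```python
# Simplified differentiable edge masking (sigmoid relaxation; the paper's
# method uses stochastic hard-concrete gates, omitted here for brevity).
import torch

def explain_edges(model, x, adj, steps=200, lam=0.01):
    target = model(x, adj).detach()                     # prediction to preserve
    logits = torch.zeros_like(adj, requires_grad=True)  # one logit per entry
    opt = torch.optim.Adam([logits], lr=0.1)
    for _ in range(steps):
        mask = torch.sigmoid(logits) * adj              # masked adjacency
        out = model(x, mask)
        # Stay faithful to the original prediction, but keep the mask small.
        loss = (out - target).pow(2).sum() + lam * mask.sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(logits) * adj                  # near-zero => droppable

# Edges whose learned mask is near zero can be removed with little effect on
# the model's output, matching the paper's observation.
```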
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
However, GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
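XGNN trains a graph generator with reinforcement learning; the sketch below substitutes a much simpler greedy hill-climb for the model-level idea, adding whichever edge most increases the target class's probability. The model(x, adj) interface and fixed node features are assumptions.

```python
# Greedy stand-in for model-level explanation (NOT XGNN's RL generator):
# grow a graph edge-by-edge to maximize the classifier's target-class score.
import torch

@torch.no_grad()
def grow_prototype(model, x, target_class, steps=8):
    n = x.shape[0]
    adj = torch.eye(n)                               # start from self-loops only
    for _ in range(steps):
        best, best_p = None, model(x, adj).softmax(-1)[target_class]
        for i in range(n):
            for j in range(i + 1, n):
                if adj[i, j] == 0:
                    trial = adj.clone()
                    trial[i, j] = trial[j, i] = 1.0  # candidate edge
                    p = model(x, trial).softmax(-1)[target_class]
                    if p > best_p:
                        best, best_p = (i, j), p
        if best is None:                             # no edge helps anymore
            break
        i, j = best
        adj[i, j] = adj[j, i] = 1.0
    return adj   # a graph pattern the model associates with target_class
```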
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.