Extracting Interpretable Logic Rules from Graph Neural Networks
- URL: http://arxiv.org/abs/2503.19476v1
- Date: Tue, 25 Mar 2025 09:09:46 GMT
- Title: Extracting Interpretable Logic Rules from Graph Neural Networks
- Authors: Chuqin Geng, Zhaoyue Wang, Ziyu Zhao, Haolin Ye, Xujie Si
- Abstract summary: Graph neural networks (GNNs) operate over both input feature spaces and graph structures. We propose a novel framework, LOGICXGNN, for extracting interpretable logic rules from GNNs. LOGICXGNN is model-agnostic, efficient, and data-driven, eliminating the need for predefined concepts.
- Score: 7.262955921646326
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) operate over both input feature spaces and combinatorial graph structures, making it challenging to understand the rationale behind their predictions. As GNNs gain widespread popularity and demonstrate success across various domains, such as drug discovery, studying their interpretability has become a critical task. To address this, many explainability methods have been proposed, with recent efforts shifting from instance-specific explanations to global concept-based explainability. However, these approaches face several limitations, such as relying on predefined concepts and explaining only a limited set of patterns. To address these limitations, we propose a novel framework, LOGICXGNN, for extracting interpretable logic rules from GNNs. LOGICXGNN is model-agnostic, efficient, and data-driven, eliminating the need for predefined concepts. More importantly, it can serve as a rule-based classifier and even outperform the original neural models. Its interpretability facilitates knowledge discovery, as demonstrated by its ability to extract detailed and accurate chemistry knowledge that is often overlooked by existing methods. Another key advantage of LOGICXGNN is its ability to generate new graph instances in a controlled and transparent manner, offering significant potential for applications such as drug design. We empirically demonstrate these merits through experiments on real-world datasets such as MUTAG and BBBP.
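The paper's own pipeline is not reproduced here, but the general recipe behind this kind of rule extraction can be sketched: binarize hidden GNN activations into boolean "concepts," then fit a shallow, readable rule learner over them. Below is a minimal sketch under stated assumptions: `gnn_embed` is a hypothetical helper returning a hidden activation vector per graph, and a decision tree stands in for the paper's actual rule-mining procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_rules(gnn_embed, graphs, labels, max_depth=3):
    # Stack hidden activations from the trained GNN: (n_graphs, n_neurons).
    H = np.stack([gnn_embed(g) for g in graphs])
    # Binarize each neuron against its mean activation -> boolean "concepts".
    concepts = (H > H.mean(axis=0)).astype(int)
    # A shallow tree over concepts acts as a rule-based surrogate classifier;
    # each root-to-leaf path reads as an IF-THEN rule over neuron activations.
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(concepts, labels)
    names = [f"neuron_{i}" for i in range(H.shape[1])]
    return tree, export_text(tree, feature_names=names)
```

Comparing the surrogate's held-out accuracy against the original GNN is one way to gauge how faithfully such rules capture the model's behavior.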
Related papers
- Do graph neural network states contain graph properties? [5.222978725954348]
We present a model explainability pipeline for graph neural networks (GNNs) employing diagnostic classifiers. This pipeline aims to probe and interpret the learned representations in GNNs across various architectures and datasets.
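As a rough illustration of the diagnostic-classifier idea (a generic probing sketch, not this paper's exact pipeline): freeze the GNN, then test whether a simple linear model can recover a graph property from its embeddings. The `embeddings` and `property_labels` inputs are assumed to come from a trained GNN and its input graphs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe(embeddings, property_labels):
    # Split frozen GNN embeddings and the target graph property.
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, property_labels, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # High held-out accuracy suggests the property is linearly
    # decodable from the learned representations.
    return clf.score(X_te, y_te)
```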
arXiv Detail & Related papers (2024-11-04T15:26:07Z)
- Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z)
- How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics of graph neural networks (GNNs) in function space.
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
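DEGREE's actual mechanism decomposes the GNN's own message passing; as a loose contrast only, the sketch below estimates per-node contributions by simple occlusion (zeroing a node's features and measuring the drop in the class score). The `model(x, adj) -> logits` signature is an assumption.

```python
import torch

def occlusion_contributions(model, x, adj, target_class):
    with torch.no_grad():
        # Baseline class score on the intact graph.
        base = model(x, adj)[target_class]
        contribs = []
        for i in range(x.shape[0]):
            x_masked = x.clone()
            x_masked[i] = 0.0  # occlude node i's input features
            contribs.append(base - model(x_masked, adj)[target_class])
    # Larger values = nodes whose removal hurts the prediction more.
    return torch.stack(contribs)
```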
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis [0.0]
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks, yet they lack interpretability and transparency.
Current explainability approaches are typically local and treat GNNs as black boxes.
We propose a novel approach for producing global explanations for GNNs using neuron-level concepts.
arXiv Detail & Related papers (2022-08-22T21:30:55Z)
- FlowX: Towards Explainable Graph Neural Networks via Message Flows [59.025023020402365]
We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms.
We propose a novel method, FlowX, to explain GNNs by identifying important message flows.
We then propose an information-controlled learning algorithm to train flow scores toward diverse explanation targets.
arXiv Detail & Related papers (2022-06-26T22:48:15Z)
- Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks [33.20913249848369]
Graph Neural Networks (GNNs) are deep learning models that take graph data as inputs, and they are applied to various tasks such as traffic prediction and molecular property prediction.
We extend explainability methods for CNNs, such as Local Interpretable Model-Agnostic Explanations (LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation Mapping (Grad-CAM) to GNNs.
The experimental results indicate that the LIME-based approach is the most efficient explainability method across multiple tasks in real-world settings, outperforming even the state-of-the-art.
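One way such a Grad-CAM extension can look for graph data (a minimal sketch under assumed interfaces, not the paper's exact formulation): weight final-layer node features by the gradient of the class score, then ReLU-pool into per-node importances. A `model` returning `(logits, node_feats)` is an assumption here.

```python
import torch

def gnn_grad_cam(model, x, adj, target_class):
    logits, node_feats = model(x, adj)   # node_feats: (n_nodes, d)
    node_feats.retain_grad()             # keep gradients on this non-leaf tensor
    logits[target_class].backward()
    alpha = node_feats.grad.mean(dim=0)  # channel weights, as in CAM
    cam = torch.relu(node_feats.detach() @ alpha)  # per-node importance
    return cam / (cam.max() + 1e-8)      # normalize to [0, 1]
```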
arXiv Detail & Related papers (2021-11-01T06:27:29Z)
- Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction [2.8544822698499255]
Graph neural networks (GNNs) have seen only limited acceptance in drug discovery due to their lack of interpretability.
In this work, we build three levels of benchmark datasets to quantitatively assess the interpretability of the state-of-the-art GNN models.
We implement recent XAI methods in combination with different GNN algorithms to highlight the benefits, limitations, and future opportunities for drug discovery.
arXiv Detail & Related papers (2021-07-01T04:49:29Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn binary node representations with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
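BGN's details are in the paper; the core trick binarized networks of this kind typically rely on can be sketched as sign() in the forward pass with a straight-through gradient in the backward pass (a generic sketch, not necessarily BGN's exact scheme):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # {-1, 0, +1}; in practice x is rarely exactly 0

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |x| <= 1,
        # treating the (zero-gradient) sign() as identity near the origin.
        return grad_out * (x.abs() <= 1).float()

# Usage inside a GNN layer: h_bin = BinarizeSTE.apply(h)
```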
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
However, inference in MLNs is computationally intensive, making industrial-scale applications of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
- Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have attracted considerable interest in recent years.
In this paper, we use theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
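A hedged sketch of what node masking can look like during training (a generic version under assumed dense-adjacency shapes; the paper's exact scheme may differ): randomly drop a fraction of nodes from the adjacency matrix before message passing.

```python
import torch

def mask_nodes(adj, mask_prob=0.1):
    # adj: dense (n_nodes, n_nodes) adjacency matrix.
    n = adj.shape[0]
    keep = (torch.rand(n) >= mask_prob).float()  # 0 = masked node
    # Zero out all edges touching masked nodes so they neither send
    # nor receive messages in this training step.
    return adj * keep.unsqueeze(0) * keep.unsqueeze(1)
```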
arXiv Detail & Related papers (2020-01-17T06:26:40Z)