Sound Logical Explanations for Mean Aggregation Graph Neural Networks
- URL: http://arxiv.org/abs/2511.11593v1
- Date: Mon, 27 Oct 2025 13:23:21 GMT
- Title: Sound Logical Explanations for Mean Aggregation Graph Neural Networks
- Authors: Matthew Morris, Ian Horrocks
- Abstract summary: Graph neural networks (GNNs) are frequently used for knowledge graph completion. We consider GNNs with mean aggregation and non-negative weights (MAGNNs). Our experiments show that restricting mean-aggregation GNNs to have non-negative weights yields comparable or improved performance on standard inductive benchmarks.
- Score: 7.08300385454159
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are frequently used for knowledge graph completion. Their black-box nature has motivated work that uses sound logical rules to explain predictions and characterise their expressivity. However, despite the prevalence of GNNs that use mean as an aggregation function, explainability and expressivity results are lacking for them. We consider GNNs with mean aggregation and non-negative weights (MAGNNs), proving the precise class of monotonic rules that can be sound for them, as well as providing a restricted fragment of first-order logic to explain any MAGNN prediction. Our experiments show that restricting mean-aggregation GNNs to have non-negative weights yields comparable or improved performance on standard inductive benchmarks, that sound rules are obtained in practice, that insightful explanations can be generated in practice, and that the sound rules can expose issues in the trained models.
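The central restriction is concrete enough to sketch: each layer averages neighbour features and combines them with the node's own features through weight matrices constrained to be non-negative, so the layer is monotone in its inputs. A minimal toy layer in plain Python (all names are hypothetical; this is not the authors' implementation):

```python
def magnn_layer(h, adj, w_self, w_nbr):
    """One illustrative MAGNN-style layer: mean aggregation over
    neighbours, non-negative weights, ReLU activation.

    h:      list of n feature vectors (each of length d)
    adj:    n x n 0/1 adjacency matrix
    w_self, w_nbr: d x d weight matrices (clamped non-negative below)
    """
    n, d = len(h), len(h[0])
    # Enforce the non-negativity restriction by clamping.
    ws = [[max(x, 0.0) for x in row] for row in w_self]
    wn = [[max(x, 0.0) for x in row] for row in w_nbr]
    out = []
    for v in range(n):
        nbrs = [u for u in range(n) if adj[v][u]]
        # Mean aggregation; isolated nodes contribute a zero vector.
        mean = [sum(h[u][k] for u in nbrs) / len(nbrs) if nbrs else 0.0
                for k in range(d)]
        row = []
        for j in range(d):
            z = (sum(h[v][k] * ws[k][j] for k in range(d))
                 + sum(mean[k] * wn[k][j] for k in range(d)))
            row.append(max(z, 0.0))  # ReLU keeps the layer monotone
        out.append(row)
    return out
```

With non-negative inputs and weights, increasing any input feature can never decrease any output feature, which is the monotonicity property that makes sound monotonic rules possible.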
Related papers
- Enhancing Logical Expressiveness in Graph Neural Networks via Path-Neighbor Aggregation [22.086161213961244]
We propose Path-Neighbor enhanced GNN (PN-GNN) to enhance the logical expressive power of GNNs. First, we analyze the logical expressive power of existing GNN-based methods and point out their shortcomings. Then, we theoretically investigate the logical expressive power of PN-GNN, showing that it not only has strictly stronger expressive power than C-GNN but also that its $(k+1)$-hop logical expressiveness is strictly superior to that of $k$-hop.
arXiv Detail & Related papers (2025-11-11T08:59:10Z) - Logical Expressivity and Explanations for Monotonic GNNs with Scoring Functions [10.533348468499826]
Graph neural networks (GNNs) are often used for the task of link prediction. We show how GNNs and scoring functions can be adapted to be monotonic.
arXiv Detail & Related papers (2025-08-14T15:56:48Z) - Extracting Interpretable Logic Rules from Graph Neural Networks [7.262955921646328]
Graph neural networks (GNNs) operate over both input feature spaces and graph structures. We propose a novel framework, LOGICXGNN, for extracting interpretable logic rules from GNNs. LOGICXGNN is model-agnostic, efficient, and data-driven, eliminating the need for predefined concepts.
arXiv Detail & Related papers (2025-03-25T09:09:46Z) - Relational Graph Convolutional Networks Do Not Learn Sound Rules [13.66949379381985]
Graph neural networks (GNNs) are frequently used to predict missing facts in knowledge graphs (KGs).
Recent work has aimed to explain their predictions using Datalog, a widely used logic-based formalism.
We consider one of the most popular GNN architectures for KGs, R-GCN, and we provide two methods to extract rules that explain its predictions and are sound.
arXiv Detail & Related papers (2024-08-14T15:46:42Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Understanding Expressivity of GNN in Rule Learning [36.04983130825589]
Rule learning is critical to improving knowledge graph (KG) reasoning.
GNNs with tail entity scoring are unified into a common framework.
We propose a novel labeling strategy to learn more rules in KG reasoning.
arXiv Detail & Related papers (2023-03-22T04:49:00Z) - Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
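The closed-path claim rests on a standard linear-algebra fact worth recalling: entry $(A^k)_{vv}$ of the $k$-th power of the adjacency matrix counts the closed walks of length $k$ that start and end at node $v$. A small sketch of that fact in plain Python (not the paper's construction):

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def closed_walk_counts(adj, length):
    """Diagonal of adj**length: closed walks of the given length per node."""
    power = adj
    for _ in range(length - 1):
        power = matmul(power, adj)
    return [power[i][i] for i in range(len(adj))]
```

On a triangle, for example, each node lies on two closed walks of length 3 (one per direction around the cycle), so the diagonal of $A^3$ is all twos.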
arXiv Detail & Related papers (2022-05-19T18:40:25Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN)
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - ProtGNN: Towards Self-Explaining Graph Neural Networks [12.789013658551454]
We propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs.
ProtGNN and ProtGNN+ can provide inherent interpretability while achieving accuracy on par with the non-interpretable counterparts.
arXiv Detail & Related papers (2021-12-02T01:16:29Z) - Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even though these are spurious correlations.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z) - The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
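The mechanism behind RNI is simple to sketch: each node's input features are augmented with independently sampled random values, which lets otherwise indistinguishable nodes be told apart. An illustrative sketch (the extra dimension count and the distribution are arbitrary choices here, not taken from the paper):

```python
import random

def add_rni(features, extra_dims=4, seed=None):
    """Return a copy of the node feature vectors with extra_dims
    independently sampled random values appended to each node
    (random node initialization). The input list is not mutated.
    """
    rng = random.Random(seed)
    return [row + [rng.uniform(-1.0, 1.0) for _ in range(extra_dims)]
            for row in features]
```

A GNN then runs on the augmented features as usual; predictions become random variables, and expressivity statements are made with high probability over the sampled initialization.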
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making the industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.