Systematic Relational Reasoning With Epistemic Graph Neural Networks
- URL: http://arxiv.org/abs/2407.17396v2
- Date: Thu, 27 Feb 2025 22:50:41 GMT
- Title: Systematic Relational Reasoning With Epistemic Graph Neural Networks
- Authors: Irtaza Khalid, Steven Schockaert
- Abstract summary: We propose the Epistemic GNN (EpiGNN), a novel parameter-efficient and scalable GNN architecture with an inductive bias for systematic reasoning. We show that EpiGNNs achieve state-of-the-art results on link prediction tasks that require systematic reasoning. For inductive knowledge graph completion, EpiGNNs rival the performance of state-of-the-art specialized approaches.
- Score: 17.49288661342947
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing models that can learn to reason is a notoriously challenging problem. We focus on reasoning in relational domains, where the use of Graph Neural Networks (GNNs) seems like a natural choice. However, previous work has shown that regular GNNs lack the ability to systematically generalize from training examples on test graphs requiring longer inference chains, which fundamentally limits their reasoning abilities. A common solution relies on neuro-symbolic methods that systematically reason by learning rules, but their scalability is often limited and they tend to make unrealistically strong assumptions, e.g., that the answer can always be inferred from a single relational path. We propose the Epistemic GNN (EpiGNN), a novel parameter-efficient and scalable GNN architecture with an epistemic inductive bias for systematic reasoning. Node embeddings in EpiGNNs are treated as epistemic states, and message passing is implemented accordingly. We show that EpiGNNs achieve state-of-the-art results on link prediction tasks that require systematic reasoning. Furthermore, for inductive knowledge graph completion, EpiGNNs rival the performance of state-of-the-art specialized approaches. Finally, we introduce two new benchmarks that go beyond standard relational reasoning by requiring the aggregation of information from multiple paths. Here, existing neuro-symbolic approaches fail, yet EpiGNNs learn to reason accurately. Code and datasets are available at https://github.com/erg0dic/gnn-sg.
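The abstract's central idea, treating node embeddings as epistemic states and implementing message passing accordingly, can be conveyed with a minimal sketch. Below, each node state is a vector in [0, 1] recording how strongly each relation to a query entity has been established; composition along a path acts like conjunction (product) and aggregation over alternative paths like disjunction (max). The composition tensor, the combination rules, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def epistemic_message_passing(num_nodes, num_relations, edges, source, steps=3, seed=0):
    """edges: iterable of (head, relation, tail) triples.

    Returns state[v, r] in [0, 1]: the degree to which relation r
    between `source` and node v has been established so far.
    """
    rng = np.random.default_rng(seed)
    # Soft composition scores: comp[r_known, r_edge, r_new] is the strength
    # of inferring r_new from r_known followed by r_edge. In a trained
    # EpiGNN-like model this tensor would be learned; here it is random.
    comp = rng.uniform(0.5, 1.0, (num_relations, num_relations, num_relations))

    state = np.zeros((num_nodes, num_relations))
    for h, r, t in edges:
        if h == source:
            state[t, r] = 1.0  # directly observed facts are fully established

    for _ in range(steps):
        new_state = state.copy()
        for h, r_edge, t in edges:
            for r_known in range(num_relations):
                if state[h, r_known] == 0.0:
                    continue
                for r_new in range(num_relations):
                    # Conjunction along a path (product),
                    # disjunction across alternative paths (max).
                    msg = state[h, r_known] * comp[r_known, r_edge, r_new]
                    new_state[t, r_new] = max(new_state[t, r_new], msg)
        state = new_state
    return state

# Toy usage: 4 nodes, 2 relations, a chain starting at node 0.
print(epistemic_message_passing(4, 2, [(0, 0, 1), (1, 1, 2), (2, 0, 3)], source=0))
```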
Related papers
- Extracting Interpretable Logic Rules from Graph Neural Networks [7.262955921646326]
Graph neural networks (GNNs) operate over both input feature spaces and graph structures.
We propose a novel framework, LOGICXGNN, for extracting interpretable logic rules from GNNs.
LOGICXGNN is model-agnostic, efficient, and data-driven, eliminating the need for predefined concepts.
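As a generic illustration of post-hoc rule extraction (this is not the LOGICXGNN algorithm, whose details the summary does not give): binarize a GNN's hidden activations into boolean "concepts", fit a shallow decision tree on them, and read each root-to-leaf path off as a logic rule.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_rules(hidden, labels, max_depth=3):
    """hidden: (n_graphs, d) pooled GNN activations; labels: (n_graphs,)."""
    concepts = (hidden > hidden.mean(axis=0)).astype(int)  # crude binarization
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(concepts, labels)
    names = [f"concept_{j}" for j in range(concepts.shape[1])]
    return export_text(tree, feature_names=names)  # each path reads as a rule

# Toy usage with random activations standing in for a trained GNN:
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 8))
y = (H[:, 0] + H[:, 3] > 0).astype(int)
print(extract_rules(H, y))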
arXiv Detail & Related papers (2025-03-25T09:09:46Z)
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with graph size on a logarithmic scale and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
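One way to build intuition for this claim (a toy, not the paper's model): with temporal coding, a node's spike time can encode the cost of the best path reaching it, and per-synapse delays can encode relation-specific costs, so event-driven propagation computes earliest spike times, i.e. shortest delay-weighted paths.

```python
import heapq

def earliest_spike_times(num_nodes, synapses, source):
    """synapses: list of (pre, post, delay) with delay > 0."""
    adj = [[] for _ in range(num_nodes)]
    for pre, post, delay in synapses:
        adj[pre].append((post, delay))
    t_spike = [float("inf")] * num_nodes
    t_spike[source] = 0.0
    queue = [(0.0, source)]
    while queue:
        t, v = heapq.heappop(queue)
        if t > t_spike[v]:
            continue  # stale event: the node already spiked earlier
        for u, delay in adj[v]:
            if t + delay < t_spike[u]:
                t_spike[u] = t + delay
                heapq.heappush(queue, (t_spike[u], u))
    return t_spike

print(earliest_spike_times(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 3, 5.0), (3, 2, 0.5)], 0))
```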
arXiv Detail & Related papers (2024-05-27T05:53:30Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
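A heavily simplified sketch of decorrelation by sample reweighting (the paper's method is nonlinear; this toy only suppresses linear cross-correlations, and all names are illustrative): learn per-graph weights on the simplex that shrink the off-diagonal entries of the weighted covariance of the representations.

```python
import torch

def learn_decorrelation_weights(Z, steps=500, lr=0.05):
    """Z: (n, d) graph representations. Returns weights on the simplex."""
    logits = torch.zeros(Z.shape[0], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(logits, dim=0)        # weights sum to 1
        mean = (w[:, None] * Z).sum(0)
        Zc = Z - mean
        cov = (w[:, None] * Zc).T @ Zc          # weighted covariance
        off_diag = cov - torch.diag(torch.diag(cov))
        loss = (off_diag ** 2).sum()            # penalize cross-correlations
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.softmax(logits, dim=0).detach()

Z = torch.randn(128, 6)
Z[:, 1] = 0.8 * Z[:, 0] + 0.2 * torch.randn(128)  # inject a spurious correlation
w = learn_decorrelation_weights(Z)
```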
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Neural-Symbolic Recommendation with Graph-Enhanced Information [7.841447116972524]
We build a neuro-symbolic recommendation model with both global implicit reasoning ability and local explicit logic reasoning ability.
We transform user behavior into propositional logic expressions to achieve recommendations from the perspective of cognitive reasoning.
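An illustrative sketch of recommendation as propositional reasoning, under the assumption (ours, not necessarily the paper's architecture) that small neural AND and IMPLY modules score the expression (i1 AND i2 AND ...) -> candidate built from a user's history.

```python
import torch
import torch.nn as nn

class LogicRecommender(nn.Module):
    def __init__(self, n_items, dim=32):
        super().__init__()
        self.item = nn.Embedding(n_items, dim)
        self.and_net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.imply_net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, history, candidate):
        """history: (batch, k) item ids; candidate: (batch,) item ids."""
        hs = self.item(history)                    # (batch, k, dim)
        premise = hs[:, 0]
        for j in range(1, hs.shape[1]):            # fold AND over the history
            premise = self.and_net(torch.cat([premise, hs[:, j]], dim=-1))
        pair = torch.cat([premise, self.item(candidate)], dim=-1)
        return torch.sigmoid(self.imply_net(pair)) # score of premise -> candidate

model = LogicRecommender(n_items=100)
score = model(torch.tensor([[1, 2, 3]]), torch.tensor([7]))
```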
arXiv Detail & Related papers (2023-07-11T06:29:31Z)
- CI-GNN: A Granger Causality-Inspired Graph Neural Network for Interpretable Brain Network-Based Psychiatric Diagnosis [40.26902764049346]
We propose a Granger causality-inspired graph neural network (CI-GNN) to explain brain-network-based psychiatric diagnosis.
CI-GNN learns disentangled subgraph-level representations α and β that encode, respectively, the causal and non-causal aspects of the original graph.
We empirically evaluate the performance of CI-GNN against three baseline GNNs and four state-of-the-art GNN explainers on synthetic data and three large-scale brain disease datasets.
arXiv Detail & Related papers (2023-01-04T14:36:44Z)
- GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks [25.94529851210956]
We propose GNNInterpreter, a model-agnostic, model-level explanation method for Graph Neural Networks (GNNs) that follow the message passing scheme.
GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect.
Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features.
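A minimal sketch in the spirit of model-level explanation by learning a generative graph distribution (not GNNInterpreter's exact objective): parameterize independent Bernoulli edge probabilities, relax them with a sigmoid, and ascend the frozen model's logit for the target class under a sparsity penalty.

```python
import torch

def explain_class(model, num_nodes, target_class, steps=300, lr=0.1):
    """model: maps a soft (num_nodes, num_nodes) adjacency to class logits."""
    edge_logits = torch.zeros(num_nodes, num_nodes, requires_grad=True)
    opt = torch.optim.Adam([edge_logits], lr=lr)
    for _ in range(steps):
        soft_adj = torch.sigmoid(edge_logits)          # relaxed edges in (0, 1)
        logits = model(soft_adj)
        loss = -logits[target_class]                   # maximize the target logit
        loss = loss + 0.01 * soft_adj.sum()            # sparsity regularizer
        opt.zero_grad(); loss.backward(); opt.step()
    return (torch.sigmoid(edge_logits) > 0.5).float()  # thresholded explanation graph

# Toy frozen "model": prefers a triangle among the first three nodes.
def toy_model(adj):
    tri = adj[0, 1] * adj[1, 2] * adj[2, 0]
    return torch.stack([1.0 - tri, tri])

print(explain_class(toy_model, num_nodes=4, target_class=1))
```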
arXiv Detail & Related papers (2022-09-15T07:45:35Z)
- ProtGNN: Towards Self-Explaining Graph Neural Networks [12.789013658551454]
We propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs.
ProtGNN and ProtGNN+ provide inherent interpretability while achieving accuracy on par with their non-interpretable counterparts.
arXiv Detail & Related papers (2021-12-02T01:16:29Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Most Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when these correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
arXiv Detail & Related papers (2021-08-11T14:07:01Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization.
We study the optimization dynamics of GNNs, focusing on the effects of skip connections and increased depth.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, the first such result for GNNs not relying on computationally demanding higher-order properties.
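RNI itself is simple to sketch: concatenate freshly sampled random features to each node's input on every forward pass, so otherwise indistinguishable nodes become distinguishable with high probability. The dimension and distribution below are illustrative choices.

```python
import torch

def with_random_node_init(x, rni_dim=4):
    """x: (num_nodes, d) node features -> (num_nodes, d + rni_dim)."""
    r = torch.rand(x.shape[0], rni_dim)   # resampled on every call
    return torch.cat([x, r], dim=1)

x = torch.ones(5, 3)                      # five identical nodes
print(with_random_node_init(x))           # now pairwise distinct w.h.p.
```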
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN).
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
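A minimal sketch of the implicit-layer idea, assuming a tanh update and a crude spectral-norm shrinkage in place of the paper's actual well-posedness conditions: iterate Z <- tanh(A Z W + X U) to an approximate fixed point instead of stacking finitely many layers.

```python
import torch

def implicit_gnn_layer(A, X, W, U, tol=1e-5, max_iter=100):
    """A: (n, n) normalized adjacency; X: (n, d_in); W: (h, h); U: (d_in, h)."""
    W = 0.9 * W / torch.linalg.matrix_norm(W, ord=2)  # enforce a contraction
    Z = torch.zeros(X.shape[0], W.shape[0])
    bias = X @ U
    for _ in range(max_iter):
        Z_next = torch.tanh(A @ Z @ W + bias)
        if torch.norm(Z_next - Z) < tol:
            break  # (approximate) fixed point reached
        Z = Z_next
    return Z

n, d_in, h = 6, 3, 8
A = torch.rand(n, n); A = A / A.sum(1, keepdim=True)  # row-normalized adjacency
Z = implicit_gnn_layer(A, torch.randn(n, d_in), torch.randn(h, h), torch.randn(d_in, h))
```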
arXiv Detail & Related papers (2020-09-14T06:04:55Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
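A heavily simplified sketch of the embedding-meets-MLN idea (far simpler than ExpressGNN; the rule, the Bernoulli mean-field posterior, and all names are assumptions): entity embeddings parameterize posterior probabilities over unknown facts, tuned to maximize the expected satisfaction of weighted logic rules.

```python
import torch

# Toy domain: fact f(a, b) over entities a, b; one rule "f(a,b) -> f(b,a)"
# with weight 2.0. Entity embeddings parameterize a mean-field posterior.
n_entities, dim = 5, 16
emb = torch.nn.Parameter(torch.randn(n_entities, dim) * 0.1)
opt = torch.optim.Adam([emb], lr=0.05)
rule_weight = 2.0

for _ in range(200):
    # q[a, b] = posterior probability that f(a, b) holds.
    q = torch.sigmoid(emb @ emb.T)
    # Expected satisfaction of f(a,b) -> f(b,a) is 1 - q(a,b) * (1 - q(b,a)).
    sat = 1.0 - q * (1.0 - q.T)
    loss = -rule_weight * torch.log(sat + 1e-9).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```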
arXiv Detail & Related papers (2020-01-29T23:34:36Z)