Efficient Probabilistic Logic Reasoning with Graph Neural Networks
- URL: http://arxiv.org/abs/2001.11850v2
- Date: Tue, 4 Feb 2020 01:10:16 GMT
- Title: Efficient Probabilistic Logic Reasoning with Graph Neural Networks
- Authors: Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi,
Le Song
- Abstract summary: Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
- Score: 63.10
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Markov Logic Networks (MLNs), which elegantly combine logic rules and
probabilistic graphical models, can be used to address many knowledge graph
problems. However, inference in MLNs is computationally intensive, making the
industrial-scale application of MLNs very difficult. In recent years, graph
neural networks (GNNs) have emerged as efficient and effective tools for
large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate
prior logic rules into the models, and may require many labeled examples for a
target task. In this paper, we explore the combination of MLNs and GNNs, and
use graph neural networks for variational inference in MLN. We propose a GNN
variant, named ExpressGNN, which strikes a nice balance between the
representation power and the simplicity of the model. Our extensive experiments
on several benchmark datasets demonstrate that ExpressGNN leads to effective
and efficient probabilistic logic reasoning.
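To make the variational-inference idea concrete, here is a minimal numpy sketch of the general approach (an illustration only, not ExpressGNN's actual architecture; the entity embeddings, the logistic scorer `q_prob`, and the single symmetry rule are all invented for the example). Entity embeddings, which in ExpressGNN would come from a GNN over the knowledge graph, parameterize a mean-field Bernoulli posterior over unobserved facts, and the training signal involves the expected number of satisfied rule groundings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge base: 4 entities, one binary predicate Friend(x, y).
# Simplified ExpressGNN-style idea: each entity gets an embedding, and a
# mean-field posterior q(fact) = Bernoulli(p) is computed from the
# embeddings of the fact's arguments.
n_entities, dim = 4, 8
entity_emb = rng.normal(size=(n_entities, dim))   # would come from a GNN
w = rng.normal(size=2 * dim) * 0.1                # per-predicate scorer

def q_prob(x, y):
    """Posterior probability that Friend(x, y) holds."""
    z = np.concatenate([entity_emb[x], entity_emb[y]])
    return 1.0 / (1.0 + np.exp(-w @ z))

# One logic rule: Friend(x, y) => Friend(y, x). Under mean-field, the
# expected truth value of a ground rule factorizes over its facts:
# E[1 - a + a*b] for the implication a => b with independent a, b.
def expected_rule_score(x, y):
    p_xy, p_yx = q_prob(x, y), q_prob(y, x)
    return 1.0 - p_xy + p_xy * p_yx

total = sum(expected_rule_score(x, y)
            for x in range(n_entities) for y in range(n_entities) if x != y)
print("expected satisfied rule groundings:", total)
```

In the paper, such posteriors are trained jointly with the rule weights via a variational-EM-style procedure; the sketch only shows how expected rule satisfaction is computed under a mean-field posterior.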
Related papers
- A Logic for Reasoning About Aggregate-Combine Graph Neural Networks [11.31] (2024-04-30)
We show that each formula can be transformed into an equivalent graph neural network (GNN).
We also show that the satisfiability problem is PSPACE-complete.
- LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.55] (2023-02-03)
We propose to capture long-distance dependencies in graphs with shallower rather than deeper models, which leads to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further acceleration through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
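The specifics of LazyGNN's lazy propagation are in the paper; as a rough sketch of the underlying "propagate far, learn shallow" idea (closer to SGC-style decoupling than to LazyGNN itself, with all sizes invented for the example), one can precompute multi-hop feature propagation once and train only a shallow model on top:

```python
import numpy as np

def normalized_adj(A):
    """Symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                  # undirected, no self-loops
X = rng.normal(size=(6, 4))

S = normalized_adj(A)
X_prop = np.linalg.matrix_power(S, 4) @ X       # K=4 hops, precomputed once
# A shallow (1-layer) model then consumes long-range information:
W = rng.normal(size=(4, 2)) * 0.1
logits = X_prop @ W
print(logits.shape)                             # (6, 2)
```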
- Understanding and Improving Deep Graph Neural Networks: A Probabilistic Graphical Model Perspective [22.83] (2023-01-25)
In this work, we focus on deep GNNs and propose a novel view for understanding them from a probabilistic graphical model perspective.
Based on this view, we design a more powerful GNN: the coupling graph neural network (CoGNet).
- GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks [25.95] (2022-09-15)
We propose GNNInterpreter, a model-agnostic, model-level explanation method for Graph Neural Networks (GNNs) that follow the message-passing scheme.
GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect.
Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features.
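As a toy rendition of the model-level explanation idea (invented for illustration; GNNInterpreter's actual objective, graph parameterization, and regularizers are more elaborate), one can learn a matrix of edge probabilities that maximizes a differentiable graph score, here a relaxed triangle count tr(P^3)/6 standing in for a trained GNN's class logit:

```python
import numpy as np

n, lr = 5, 0.1
theta = np.zeros((n, n))                      # symmetric edge logits
mask = 1.0 - np.eye(n)                        # forbid self-loops

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(300):
    P = sigmoid(theta) * mask                 # relaxed adjacency matrix
    grad_P = 3.0 * (P @ P).T * mask           # d tr(P^3) / dP, diagonal masked
    # chain rule through the sigmoid, then one gradient-ascent step
    theta += lr * grad_P * sigmoid(theta) * (1.0 - sigmoid(theta))

print(np.round(sigmoid(theta) * mask, 2))     # edge pattern rich in triangles
```

For this toy "triangle" score the learned pattern is simply a dense graph; the point is only the mechanism of optimizing a generative edge distribution against a model's score.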
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97] (2022-02-22)
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited ability of finite-depth GNNs to capture long-range dependencies, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
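EIGNN derives an efficient closed-form solution for its infinite-depth (implicit) layer; as a minimal sketch of the implicit-layer idea itself (the constants, random graph, and convergence threshold below are illustrative), one can iterate a contractive update Z <- gamma * S Z W + X to its fixed point:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                          # undirected graph
A_hat = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # ||S||_2 <= 1

X = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))
W /= np.linalg.norm(W, 2)                               # ||W||_2 = 1
gamma = 0.8                                             # makes the map a contraction

Z = np.zeros_like(X)
for _ in range(500):
    Z_new = gamma * S @ Z @ W + X
    done = np.max(np.abs(Z_new - Z)) < 1e-9
    Z = Z_new
    if done:
        break
print("fixed-point representation:", Z.shape)
```

Because gamma * ||S||_2 * ||W||_2 < 1, the iteration is a contraction and the fixed point is unique, which is what makes an "infinite-depth" layer well defined.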
- Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks [33.21] (2021-11-01)
Graph Neural Networks (GNNs) are deep learning models that take graph data as inputs, and they are applied to various tasks such as traffic prediction and molecular property prediction.
We extend explainability methods for CNNs, such as Local Interpretable Model-Agnostic Explanations (LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation Mapping (Grad-CAM) to GNNs.
The experimental results indicate that the LIME-based approach is the most efficient explainability method for multiple tasks in real-world situations, outperforming even the state-of-the-art methods.
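As a miniature of the LIME idea applied to a graph model (illustrative only; the paper's GNN-specific adaptations, perturbation scheme, and locality weighting are more involved, and `black_box` below is a stand-in function), one can perturb which nodes are kept, query the black-box model, and fit a linear surrogate whose weights score node importance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                       # undirected graph

def black_box(A, keep):
    """Stand-in for a trained GNN: edge count of the induced subgraph."""
    sub = A * np.outer(keep, keep)
    return sub.sum() / 2.0

masks = (rng.random((200, n)) < 0.5).astype(float)   # random node subsets
ys = np.array([black_box(A, m) for m in masks])      # black-box queries
w, *_ = np.linalg.lstsq(masks, ys, rcond=None)       # linear surrogate
print("node importance scores:", np.round(w, 2))
```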
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.98] (2020-10-05)
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
Based on this unified framework (UGNN), we instantiate a novel GNN model, ADA-UGNN, to handle graphs with adaptive smoothness across nodes.
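The core correspondence is easy to verify numerically: one gradient-descent step on the denoising objective J(F) = ||F - X||_F^2 + lam * tr(F^T L F), started at F = X with step size 1/2, reproduces a GCN-style aggregation (1 - lam) X + lam * S X, where S is the symmetrically normalized adjacency and L = I - S (the random graph and lam below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 6, 4, 0.5
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
A_hat = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
L = np.eye(n) - S
X = rng.normal(size=(n, d))

grad = 2 * (X - X) + 2 * lam * L @ X     # gradient of J at F = X
F_step = X - 0.5 * grad                  # one step, learning rate 1/2
F_gcn = (1 - lam) * X + lam * S @ X      # GCN-style aggregation
print(np.allclose(F_step, F_gcn))        # True
```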
- Stochastic Graph Neural Networks [123.39] (2020-06-04)
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, a GNN fails at its distributed task if this topological randomness is not properly taken into account.
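A minimal sketch of the random-link-fluctuation setting (the general mechanism only; the paper's stochastic GNN and its link model are more precise, and `p_keep` below is an invented parameter): sample a perturbed topology at each training step by dropping edges independently:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_keep = 6, 0.8
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                     # undirected graph

def sample_topology(A, p_keep):
    """Drop each undirected edge independently with prob 1 - p_keep."""
    keep = np.triu(rng.random(A.shape) < p_keep, 1).astype(float)
    keep = keep + keep.T                           # keep the graph symmetric
    return A * keep

for step in range(3):                              # e.g. one sample per SGD step
    A_t = sample_topology(A, p_keep)
    print("step", step, "edges kept:", int(A_t.sum() / 2))
```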
- Evaluating Logical Generalization in Graph Neural Networks [59.70] (2020-03-14)
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability of models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
- Random Features Strengthen Graph Neural Networks [40.61] (2020-02-08)
Graph neural networks (GNNs) are powerful machine learning models for various graph learning tasks.
In this paper, we demonstrate that GNNs become powerful just by adding a random feature to each node.
We show that the addition of random features enables GNNs to solve various problems that normal GNNs, including graph convolutional networks (GCNs) and graph isomorphism networks (GINs), cannot solve.
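The augmentation itself is a one-liner, sketched below (sizes invented for the example): concatenate an i.i.d. random feature to each node before running any standard GNN, so that message passing can distinguish nodes that plain GCNs or GINs would conflate:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))                    # original node features
r = rng.random(size=(6, 1))                    # one random feature per node
X_aug = np.concatenate([X, r], axis=1)         # shape (6, 5); feed to the GNN
print(X_aug.shape)
```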
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.