RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs
- URL: http://arxiv.org/abs/2010.04029v2
- Date: Fri, 16 Jul 2021 02:52:53 GMT
- Title: RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs
- Authors: Meng Qu, Junkun Chen, Louis-Pascal Xhonneux, Yoshua Bengio, Jian Tang
- Abstract summary: This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks.
Existing methods either suffer from searching in a large search space or from ineffective optimization due to sparse rewards.
- Score: 91.71504177786792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and
can generalize to other tasks, and hence are critical to learn. Existing
methods either suffer from searching in a large search space (e.g., neural
logic programming) or from ineffective optimization due to sparse rewards
(e.g., techniques based on reinforcement learning). To address
these limitations, this paper proposes a probabilistic model called RNNLogic.
RNNLogic treats logic rules as a latent variable and simultaneously trains a
rule generator and a reasoning predictor that uses logic rules. We develop an
EM-based algorithm for optimization. In each iteration, the reasoning
predictor is first updated to explore some generated logic rules for
reasoning. Then, in the E-step, a set of high-quality rules is selected from
all generated rules via posterior inference, using both the rule generator and
the reasoning predictor; in the M-step, the rule generator is updated with the
rules selected in the E-step. Experiments on four datasets demonstrate the
effectiveness of RNNLogic.
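To make the training loop concrete, here is a minimal, self-contained Python sketch of the EM structure described above. Everything in it is an illustrative simplification, not the paper's implementation: the rule generator is a plain categorical distribution over chain rules rather than an RNN, the predictor weights are set in closed form rather than learned by gradient descent, and the toy knowledge graph, rule scoring, and top-K selection are assumptions standing in for the paper's actual posterior inference.

```python
import itertools
import math

# Toy knowledge graph: (head entity, relation) -> set of tail entities.
KG = {
    ("alice", "mother"): {"carol"},
    ("carol", "husband"): {"dave"},
    ("alice", "father"): {"dave"},
}
RELATIONS = ["mother", "father", "husband"]
TRAIN = [("alice", "father", "dave")]  # query (h, r, t) with known answer t

# Candidate chain rules for the relation "father": a rule body is a
# sequence of relations to follow from the query head.
CANDIDATE_RULES = [tuple(b) for n in (1, 2)
                   for b in itertools.product(RELATIONS, repeat=n)]

def apply_rule(head, body):
    """Ground a chain rule by following its relations from `head`."""
    frontier = {head}
    for rel in body:
        frontier = set().union(*(KG.get((e, rel), set()) for e in frontier))
    return frontier

# Rule generator: a categorical prior over candidate rules
# (the paper parameterizes this with an RNN).
gen_prob = {r: 1 / len(CANDIDATE_RULES) for r in CANDIDATE_RULES}
K = 2  # number of high-quality rules kept in each E-step

for step in range(3):
    rules = list(gen_prob)
    # Update the reasoning predictor: here, weight each rule by its
    # empirical accuracy on the training queries (a closed-form stand-in
    # for the gradient-trained predictor in the paper).
    weights = {}
    for r in rules:
        hits = sum(t in apply_rule(h, r) for h, _, t in TRAIN)
        fires = sum(len(apply_rule(h, r)) > 0 for h, _, t in TRAIN)
        weights[r] = hits / fires if fires else 0.0
    # E-step: score rules by generator prior x predictor evidence and keep
    # the top K (a crude proxy for the paper's posterior inference).
    posterior = {r: gen_prob[r] * math.exp(weights[r]) for r in rules}
    selected = sorted(posterior, key=posterior.get, reverse=True)[:K]
    # M-step: refit the generator toward the selected rules (smoothed MLE).
    total = K + 0.01 * len(rules)
    gen_prob = {r: ((r in selected) + 0.01) / total for r in rules}

print(sorted(gen_prob, key=gen_prob.get, reverse=True)[:K])
```

On this toy graph the loop concentrates the generator on rule bodies that explain the training fact, such as ("father",) and ("mother", "husband"), which mirrors the intended effect of alternating predictor updates with E-step rule selection and M-step generator updates.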
Related papers
- Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules [9.303501974597548]
We introduce precision and recall for logical rules and define their composition as rule utility.
We introduce SPECTRUM, a scalable framework for learning logical models from relational data.
arXiv Detail & Related papers (2024-09-24T16:54:12Z)
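The entry above names per-rule precision and recall composed into a rule utility, but this listing does not give the definitions. The following sketch is one plausible reading, with an F1-style harmonic mean used purely as a placeholder for the paper's actual composition.

```python
# Hypothetical per-rule precision/recall, assuming a rule's quality is judged
# by the facts it derives (`groundings`) against the true facts (`positives`).
# The harmonic-mean "utility" is an assumption, not SPECTRUM's definition.
def rule_metrics(groundings: set, positives: set):
    derived_true = groundings & positives
    precision = len(derived_true) / len(groundings) if groundings else 0.0
    recall = len(derived_true) / len(positives) if positives else 0.0
    utility = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, utility

print(rule_metrics({("a", "b"), ("a", "c")}, {("a", "b"), ("d", "e")}))
# -> (0.5, 0.5, 0.5)
```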
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning [71.05093203007357]
We propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph.
A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.
Our model outperforms other rule learning methods in knowledge graph completion and is comparable to, or even better than, state-of-the-art black-box methods.
arXiv Detail & Related papers (2023-05-22T05:59:22Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNNs).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Learning Logic Rules for Document-level Relation Extraction [41.442030707813636]
We propose LogiRE, a novel probabilistic model for document-level relation extraction by learning logic rules.
LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor.
By introducing logic rules into neural networks, LogiRE explicitly captures long-range dependencies and offers better interpretability.
arXiv Detail & Related papers (2021-11-09T20:32:30Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability of models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)