Logical Entity Representation in Knowledge-Graphs for Differentiable
Rule Learning
- URL: http://arxiv.org/abs/2305.12738v1
- Date: Mon, 22 May 2023 05:59:22 GMT
- Title: Logical Entity Representation in Knowledge-Graphs for Differentiable
Rule Learning
- Authors: Chi Han, Qizheng He, Charles Yu, Xinya Du, Hanghang Tong, Heng Ji
- Abstract summary: We propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph.
A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.
Our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic logical rule learning has shown great strength in logical rule
mining and knowledge graph completion. It learns logical rules to predict
missing edges by reasoning on existing edges in the knowledge graph. However,
previous efforts have largely been limited to only modeling chain-like Horn
clauses such as $R_1(x,z)\land R_2(z,y)\Rightarrow H(x,y)$. This formulation
overlooks additional contextual information from neighboring sub-graphs of
entity variables $x$, $y$ and $z$. Intuitively, there is a large gap here, as
local sub-graphs have been found to provide important information for knowledge
graph completion. Inspired by these observations, we propose Logical Entity
RePresentation (LERP) to encode contextual information of entities in the
knowledge graph. A LERP is designed as a vector of probabilistic logical
functions on the entity's neighboring sub-graph. It is an interpretable
representation while allowing for differentiable optimization. We can then
incorporate LERP into probabilistic logical rule learning to learn more
expressive rules. Empirical results demonstrate that with LERP, our model
outperforms other rule learning methods in knowledge graph completion and is
comparable or even superior to state-of-the-art black-box methods. Moreover, we
find that our model can discover a more expressive family of logical rules.
LERP can also be further combined with embedding learning methods like TransE
to make it more interpretable.
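As a rough illustration of the setting described in the abstract (not the authors' actual implementation): in TensorLog/Neural-LP-style differentiable rule learning, a chain Horn clause $R_1(x,z)\land R_2(z,y)\Rightarrow H(x,y)$ can be scored by multiplying relation adjacency matrices, and a LERP-style entity context could, per the abstract, additionally gate entities by a vector of soft logical features computed on their neighborhoods. The sketch below uses numpy; all shapes, names, and the gating scheme are illustrative assumptions.

```python
# Hypothetical sketch of differentiable chain-rule scoring (Neural-LP style);
# not the paper's implementation. Shapes and names are illustrative only.
import numpy as np

n_entities, n_relations = 5, 3
rng = np.random.default_rng(0)

# A[r][i, j] ~ soft truth value of relation r holding from entity i to j.
A = (rng.random((n_relations, n_entities, n_entities)) < 0.3).astype(float)

def chain_rule_scores(body, A):
    """Score H(x, y) for the chain rule R_b1(x,z) & R_b2(z,y) => H(x,y).

    Composing relations along the chain is a product of adjacency
    matrices; entry (x, y) softly counts supporting paths from x to y.
    """
    score = np.eye(A.shape[1])
    for r in body:
        score = score @ A[r]
    return score

# Example: R_0(x,z) & R_1(z,y) => H(x,y)
H_scores = chain_rule_scores([0, 1], A)

# A LERP-like entity context (hypothetical form): a vector of soft logical
# features on each entity's neighborhood, here "has an outgoing edge of r".
lerp = A.max(axis=2).T            # shape (n_entities, n_relations)

# A contextual rule can then gate on x's context, e.g. require feature 2:
gated = lerp[:, 2:3] * H_scores   # broadcasts the gate over rows (variable x)
print(gated.shape)                # (5, 5)
```

Because every step here is a matrix product or elementwise operation, relaxing the 0/1 adjacency entries to real values and attaching learnable weights to candidate rules makes the whole score differentiable, which is what allows gradient-based rule learning in the first place.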
Related papers
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning (arXiv, 2024-05-02)
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
Arbitrary first-order logical queries are augmented via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT yields substantial improvements (an average +5.5% MRR gain) over advanced methods.
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning (arXiv, 2023-09-04)
We propose ChatRule, a novel framework that unleashes the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
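The summary above does not specify which quality measures ChatRule's ranking module uses; one standard way to estimate rule quality from existing KG facts is classical rule confidence (support of body-and-head over support of body). A minimal sketch under that assumption, with a toy fact set and a two-hop chain rule; all names are illustrative:

```python
# Hypothetical sketch: scoring a chain rule by standard confidence,
#   confidence = #(body holds and head holds) / #(body holds).
# This is a common rule-quality measure; ChatRule's exact metrics may differ.
from collections import defaultdict

facts = {("a", "born_in", "c"), ("c", "city_of", "d"),
         ("a", "nationality", "d"), ("b", "born_in", "c")}

def confidence(body, head, facts):
    """Confidence of e.g. born_in(x,z) & city_of(z,y) => nationality(x,y)."""
    by_rel = defaultdict(list)
    for h, r, t in facts:
        by_rel[r].append((h, t))
    support = total = 0
    r1, r2 = body
    for x, z in by_rel[r1]:
        for z2, y in by_rel[r2]:
            if z == z2:          # body grounding found
                total += 1
                support += (x, head, y) in facts
    return support / total if total else 0.0

print(confidence(("born_in", "city_of"), "nationality", facts))  # 0.5
```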
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction (arXiv, 2022-12-22)
Probabilistic Law Discovery (PLD) is a logic-based machine learning method that implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
- RulE: Knowledge Graph Reasoning with Rule Embedding (arXiv, 2022-10-24)
We propose a principled framework called RulE (which stands for Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
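As a toy illustration of what a unified embedding space for entities, relations, and rules can look like (not RulE's actual formulation, which the summary above does not give): with TransE-style translation embeddings, a triple is scored by how well h + r lands on t, and a chain rule can be scored by how well its body relations compose into its head relation. All names below are hypothetical.

```python
# Toy sketch: scoring triples and rules in one translation-embedding space.
# This is NOT RulE's actual model; it only illustrates the general idea.
import numpy as np

dim = 8
rng = np.random.default_rng(1)
entity = {"paris": rng.normal(size=dim), "france": rng.normal(size=dim)}
relation = {"capital_of": rng.normal(size=dim),
            "located_in": rng.normal(size=dim),
            "part_of": rng.normal(size=dim)}

def triple_score(h, r, t):
    # TransE: h + r should be close to t; higher (less negative) = plausible.
    return -np.linalg.norm(entity[h] + relation[r] - entity[t])

def rule_score(body, head):
    # Soft consistency of a chain rule: the composed body translations
    # should approximate the head relation's translation.
    composed = sum(relation[r] for r in body)
    return -np.linalg.norm(composed - relation[head])

print(triple_score("paris", "capital_of", "france"))
print(rule_score(["capital_of", "part_of"], "located_in"))
```

Training both score functions jointly is one way a single space can make triples and rules mutually constrain each other, which is the general idea the RulE summary points at.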
- DegreEmbed: incorporating entity embedding into logic rule learning for knowledge graph reasoning (arXiv, 2021-12-18)
Link prediction for knowledge graphs is the task of completing missing facts by reasoning over existing knowledge.
We propose DegreEmbed, a model that combines embedding-based learning and logic rule mining for inferring on KGs.
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs (arXiv, 2020-10-08)
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks.
Existing methods either search in a prohibitively large rule space or suffer from ineffective optimization due to sparse rewards.