Knowledge Reasoning via Jointly Modeling Knowledge Graphs and Soft Rules
- URL: http://arxiv.org/abs/2301.02781v1
- Date: Sat, 7 Jan 2023 05:24:29 GMT
- Title: Knowledge Reasoning via Jointly Modeling Knowledge Graphs and Soft Rules
- Authors: Yinyu Lan, Shizhu He, Kang Liu, Jun Zhao
- Abstract summary: Methods of knowledge graph completion (KGC) can be classified into two major categories: rule-based reasoning and embedding-based reasoning.
We propose a novel method that injects rules and learns representations iteratively to take full advantage of rules and embeddings.
- Score: 17.301284626706856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) play a crucial role in many applications, such as
question answering, but incompleteness is an urgent issue for their broad
application. Much research in knowledge graph completion (KGC) has been
performed to resolve this issue. The methods of KGC can be classified into two
major categories: rule-based reasoning and embedding-based reasoning. The
former has high accuracy and good interpretability, but a major challenge is to
obtain effective rules on large-scale KGs. The latter has good efficiency and
scalability, but it relies heavily on data richness and cannot fully use domain
knowledge in the form of logical rules. We propose a novel method that injects
rules and learns representations iteratively to take full advantage of rules
and embeddings. Specifically, we model the conclusions of rule groundings as
0-1 variables and use a rule confidence regularizer to remove the uncertainty
of the conclusions. The proposed approach has the following advantages: 1) It
combines the benefits of both rules and knowledge graph embeddings (KGEs) and
achieves a good balance between efficiency and scalability. 2) It uses an
iterative method to continuously improve KGEs and remove incorrect rule
conclusions. Evaluations on two public datasets show that our method
outperforms the current state-of-the-art methods, improving performance by
2.7% and 4.3% in mean reciprocal rank (MRR).
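To make the iterative recipe above concrete, the following is a minimal, illustrative sketch (not the authors' released code): rule groundings produce candidate conclusions that are treated as soft-labeled triples, a confidence-weighted regularizer folds them into a TransE-style embedding update, and low-scoring conclusions are filtered out in later iterations. All helper names, the toy KG, and the rule format are hypothetical.

```python
# Illustrative sketch of iterative rule injection for KGC (assumptions labeled above).
import numpy as np

rng = np.random.default_rng(0)
E, R, DIM = 50, 5, 16                        # entities, relations, embedding size
ent = rng.normal(scale=0.1, size=(E, DIM))   # entity embeddings
rel = rng.normal(scale=0.1, size=(R, DIM))   # relation embeddings

def score(h, r, t):
    """TransE-style plausibility of a triple (higher is more plausible)."""
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

def ground_rules(triples, rules):
    """Apply chain rules (r1, r2, r3, conf), read as r1(x,y) AND r2(y,z) -> r3(x,z),
    to the observed triples and return candidate conclusions with rule confidences."""
    index = {(h, r): t for h, r, t in triples}
    conclusions = []
    for r1, r2, r3, conf in rules:
        for h, r, t in triples:
            if r == r1 and (t, r2) in index:
                conclusions.append(((h, r3, index[(t, r2)]), conf))
    return conclusions

def train_step(triples, soft_conclusions, lr=0.05, reg=0.5):
    """One SGD pass: pull observed triples together, and pull rule conclusions
    together with strength proportional to rule confidence (the regularizer)."""
    for (h, r, t), weight in [(x, 1.0) for x in triples] + \
                             [(c, reg * w) for c, w in soft_conclusions]:
        g = weight * (ent[h] + rel[r] - ent[t])   # gradient of the squared distance
        ent[h] -= lr * g
        rel[r] -= lr * g
        ent[t] += lr * g

# Toy KG with one soft rule: brother_of(x,y) AND father_of(y,z) -> uncle_of(x,z)
triples = [(0, 0, 1), (1, 1, 2)]              # (head, relation, tail)
rules = [(0, 1, 2, 0.9)]                      # (r1, r2, r3, rule confidence)
for it in range(20):                          # iterate: ground rules, re-learn, filter
    conclusions = ground_rules(triples, rules)
    kept = [(c, w) for c, w in conclusions if it == 0 or score(*c) > -2.0]
    train_step(triples, kept)
print("plausibility of inferred uncle_of(0, 2):", round(score(0, 2, 2), 3))
```

In the full method, the 0-1 conclusion variables and the confidence regularizer are optimized jointly with the embeddings; the sketch only mimics that coupling with a fixed weight and a score threshold.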
Related papers
- Learning Rules from KGs Guided by Language Models [48.858741745144044]
Rule learning methods can be applied to predict potentially missing facts.
Ranking of rules is especially challenging over highly incomplete or biased KGs.
With the recent rise of Language Models (LMs), several works have claimed that LMs can be used as an alternative means for KG completion.
arXiv Detail & Related papers (2024-09-12T09:27:36Z)
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (which stands for Rule Embedding) that leverages logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations, and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It integrates high-order reasoning into a graph convolutional network, named HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Bayes Point Rule Set Learning [5.065947993017157]
Interpretability plays an increasingly important role in the design of machine learning algorithms.
Disjunctive Normal Forms are arguably the most interpretable way to express a set of rules.
We propose an effective bottom-up extension of the popular FIND-S algorithm to learn DNF-type rulesets.
arXiv Detail & Related papers (2022-04-11T16:50:41Z)
- Combining Rules and Embeddings via Neuro-Symbolic AI for Knowledge Base Completion [59.093293389123424]
We show that not all rule-based Knowledge Base Completion models are the same.
We propose two distinct approaches: one learns a mixture of relations, the other a mixture of paths.
When implemented on top of neuro-symbolic AI, which learns rules by extending Boolean logic to real-valued logic, the latter model achieves superior KBC accuracy, outperforming state-of-the-art rule-based KBC by 2-10% in mean reciprocal rank.
arXiv Detail & Related papers (2021-09-16T17:54:56Z)
- A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously from Knowledge Graphs [20.438750956142638]
We develop a hybrid model that learns both high-quality rules and embeddings simultaneously.
Our method uses a cross-feedback paradigm in which an embedding model guides the search of a rule mining system to mine rules and infer new facts.
arXiv Detail & Related papers (2020-09-22T20:29:27Z)
- Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs [20.251630903853016]
We propose new methods for pruning unpromising rules using rule hierarchies.
We show that the application of these hierarchical pruning methods (HPMs) is effective in removing unpromising rules.
arXiv Detail & Related papers (2020-06-29T16:33:30Z)
- Towards Learning Instantiated Logical Rules from Knowledge Graphs [20.251630903853016]
We present GPFL, a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs.
GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates that are acyclic abstract rules.
We reveal the presence of overfitting rules, their impact on the predictive performance, and the effectiveness of a simple validation method filtering out overfitting rules.
arXiv Detail & Related papers (2020-03-13T00:32:46Z)