Towards Learning Instantiated Logical Rules from Knowledge Graphs
- URL: http://arxiv.org/abs/2003.06071v2
- Date: Fri, 15 May 2020 11:11:19 GMT
- Title: Towards Learning Instantiated Logical Rules from Knowledge Graphs
- Authors: Yulong Gu, Yu Guan, Paolo Missier
- Abstract summary: We present GPFL, a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs.
GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates, which are acyclic abstract rules, and then specializes these templates into instantiated rules.
We reveal the presence of overfitting rules, their impact on the predictive performance, and the effectiveness of a simple validation method filtering out overfitting rules.
- Score: 20.251630903853016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficiently inducing high-level interpretable regularities from knowledge
graphs (KGs) is an essential yet challenging task that benefits many downstream
applications. In this work, we present GPFL, a probabilistic rule learner
optimized to mine instantiated first-order logic rules from KGs. Instantiated
rules contain constants extracted from KGs. Compared to abstract rules that
contain no constants, instantiated rules are capable of explaining and
expressing concepts in more detail. GPFL utilizes a novel two-stage rule
generation mechanism that first generalizes extracted paths into templates that
are acyclic abstract rules until a certain degree of template saturation is
achieved, then specializes the generated templates into instantiated rules.
Unlike existing works that ground every mined instantiated rule for evaluation,
GPFL shares groundings between structurally similar rules for collective
evaluation. Moreover, we reveal the presence of overfitting rules, their impact
on the predictive performance, and the effectiveness of a simple validation
method filtering out overfitting rules. Through extensive experiments on public
benchmark datasets, we show that GPFL 1.) significantly reduces the runtime of
evaluating instantiated rules; 2.) discovers substantially more high-quality
instantiated rules than existing works; 3.) improves the predictive performance
of learned rules by removing overfitting rules via validation; and 4.) is
competitive on the knowledge graph completion task compared to state-of-the-art baselines.
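The two-stage generalize-then-specialize mechanism and the grouping of structurally similar rules can be pictured with a short sketch. The code below is a hypothetical Python illustration: the path representation, function names, and the choice to bind only the final variable are assumptions made for exposition and do not reflect GPFL's actual implementation.

```python
# Minimal, hypothetical sketch of the two-stage idea described in the abstract:
# sampled KG paths are first generalized into abstract templates (constants
# replaced by variables), and each template is then specialized back into
# instantiated rules by binding a variable to constants observed in the data.
from collections import defaultdict

def generalize(path):
    """Turn a sampled path, given as (head, relation, tail) triples,
    into an abstract template by replacing entities with variables."""
    var_of = {}
    def var(entity):
        if entity not in var_of:
            var_of[entity] = f"X{len(var_of)}"
        return var_of[entity]
    return tuple((var(h), r, var(t)) for h, r, t in path)

def specialize(template, paths):
    """Specialize a template into instantiated rules by binding its
    last variable to the constants seen at the end of matching paths."""
    last_var = template[-1][2]
    rules = set()
    for path in paths:
        if generalize(path) == template:
            rules.add((template, (last_var, path[-1][2])))  # e.g. X2 -> "london"
    return rules

# Group sampled paths by their shared template, then specialize each template.
sampled_paths = [
    [("alice", "worksAt", "acme"), ("acme", "locatedIn", "london")],
    [("bob", "worksAt", "initech"), ("initech", "locatedIn", "london")],
]
by_template = defaultdict(list)
for path in sampled_paths:
    by_template[generalize(path)].append(path)
for template, paths in by_template.items():
    for rule in specialize(template, paths):
        print(rule)
```

Because both sampled paths collapse into the same template, they are processed together, which mirrors the idea of sharing groundings among structurally similar rules instead of grounding every instantiated rule separately.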
Related papers
- Learning Rules from KGs Guided by Language Models [48.858741745144044]
Rule learning methods can be applied to predict potentially missing facts.
Ranking of rules is especially challenging over highly incomplete or biased KGs.
With the recent rise of Language Models (LMs), several works have claimed that LMs can be used as an alternative means for KG completion.
arXiv Detail & Related papers (2024-09-12T09:27:36Z)
- Symbolic Working Memory Enhances Language Models for Complex Rule Application [87.34281749422756]
Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning.
We propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application.
Our framework iteratively performs symbolic rule grounding and LLM-based rule implementation.
arXiv Detail & Related papers (2024-08-24T19:11:54Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
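To make the ranking step concrete, rule quality over existing KG facts is commonly measured via support and confidence; the snippet below is a generic, hypothetical illustration of that idea for a one-atom rule body, not ChatRule's actual ranking measure.

```python
# Hypothetical sketch: support/confidence of a rule head(X, Y) <- body(X, Y)
# over a set of KG triples, i.e. the generic notion of estimating rule
# quality from existing facts.

def rule_confidence(triples, body_rel, head_rel):
    """Fraction of (X, Y) pairs satisfying the body that also satisfy the head."""
    body = {(h, t) for h, r, t in triples if r == body_rel}
    head = {(h, t) for h, r, t in triples if r == head_rel}
    if not body:
        return 0.0
    support = len(body & head)      # groundings where body and head both hold
    return support / len(body)      # confidence = support / body support

kg = [("alice", "bornIn", "london"), ("alice", "livesIn", "london"),
      ("bob", "bornIn", "paris")]
print(rule_confidence(kg, "bornIn", "livesIn"))   # 0.5
```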
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations, and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- Distilling Task-specific Logical Rules from Large Pre-trained Models [24.66436804853525]
We develop a novel framework to distill task-specific logical rules from large pre-trained models.
Specifically, we borrow recent prompt-based language models as the knowledge expert to yield initial seed rules.
Experiments on three public named entity tagging benchmarks demonstrate the effectiveness of our proposed framework.
arXiv Detail & Related papers (2022-10-06T09:12:18Z)
- Towards Target Sequential Rules [52.4562332499155]
We propose an efficient algorithm called targeted sequential rule mining (TaSRM).
It is shown that TaSRM and its variants achieve better experimental performance than the existing baseline algorithm.
arXiv Detail & Related papers (2022-06-09T18:59:54Z)
- Bayes Point Rule Set Learning [5.065947993017157]
Interpretability is having an increasingly important role in the design of machine learning algorithms.
Disjunctive Normal Forms are arguably the most interpretable way to express a set of rules.
We propose an effective bottom-up extension of the popular FIND-S algorithm to learn DNF-type rulesets.
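For context, the classic FIND-S algorithm returns the most specific conjunctive hypothesis covering all positive examples; the sketch below shows only the standard textbook version as background, not the paper's bottom-up DNF extension.

```python
# Hypothetical sketch of the classic FIND-S algorithm (conjunctive, textbook
# version), which the paper above extends bottom-up to DNF-type rulesets.

def find_s(positive_examples):
    """Return the most specific conjunctive hypothesis consistent with
    all positive examples; '?' means any value is acceptable."""
    hypothesis = list(positive_examples[0])      # start maximally specific
    for example in positive_examples[1:]:
        for i, value in enumerate(example):
            if hypothesis[i] != value:
                hypothesis[i] = "?"              # minimally generalize
    return hypothesis

# Usage: attributes are (sky, temperature, humidity)
positives = [("sunny", "warm", "normal"),
             ("sunny", "warm", "high")]
print(find_s(positives))   # ['sunny', 'warm', '?']
```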
arXiv Detail & Related papers (2022-04-11T16:50:41Z)
- Theoretical Rule-based Knowledge Graph Reasoning by Connectivity Dependency Discovery [2.945948598480997]
We present a theory for rule-based knowledge graph reasoning, based on which the connectivity dependencies in the graph are captured via multiple rule types.
Results show that our RuleDict model not only provides precise rules to interpret new triples, but also achieves state-of-the-art performance on one benchmark knowledge graph completion task.
arXiv Detail & Related papers (2020-11-12T03:00:20Z)
- Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs [20.251630903853016]
We propose new methods for pruning unpromising rules using rule hierarchies.
We show that the application of HPMs is effective in removing unpromising rules.
arXiv Detail & Related papers (2020-06-29T16:33:30Z)