LPRules: Rule Induction in Knowledge Graphs Using Linear Programming
- URL: http://arxiv.org/abs/2110.08245v1
- Date: Fri, 15 Oct 2021 17:58:16 GMT
- Authors: Sanjeeb Dash and Joao Goncalves
- Abstract summary: Rule-based methods learn first-order logic rules that capture existing facts in an input graph and then use these rules for reasoning about missing facts.
A major drawback of such methods is the lack of scalability to large datasets.
We present a simple linear programming (LP) model to choose rules from a list of candidate rules and assign weights to them.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph (KG) completion is a well-studied problem in AI. Rule-based
methods and embedding-based methods form two of the solution techniques.
Rule-based methods learn first-order logic rules that capture existing facts in
an input graph and then use these rules for reasoning about missing facts. A
major drawback of such methods is the lack of scalability to large datasets. In
this paper, we present a simple linear programming (LP) model to choose rules
from a list of candidate rules and assign weights to them. For smaller KGs, we
use simple heuristics to create the candidate list. For larger KGs, we start
with a small initial candidate list, and then use standard column generation
ideas to add more rules in order to improve the LP model objective value. To
foster interpretability and generalizability, we limit the complexity of the
set of chosen rules via explicit constraints, and tune the complexity
hyperparameter for individual datasets. We show that our method can obtain
state-of-the-art results for three out of four widely used KG datasets, while
taking significantly less computing time than other popular rule learners
including some based on neuro-symbolic methods. The improved scalability of our
method allows us to tackle large datasets such as YAGO3-10.
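The rule-selection step described in the abstract can be sketched as a small linear program: given a candidate rule list, choose nonnegative weights that maximize the (capped) coverage of known facts, subject to an explicit complexity budget. The sketch below is a minimal illustration under assumed toy data, not the paper's actual formulation: the coverage matrix `A`, the rule `lengths`, and the `budget` are invented, and SciPy's `linprog` stands in for whatever solver the authors use.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: 3 candidate rules, 4 known facts.
# A[f, r] = 1 if rule r derives fact f (toy data for illustration).
A = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
], dtype=float)
lengths = np.array([2.0, 3.0, 2.0])  # rule body lengths (complexity)
budget = 5.0                         # complexity cap (tuned per dataset)

n_facts, n_rules = A.shape

# Maximize sum_f s_f with s_f <= min(1, sum_r A[f, r] * w_r),
# subject to sum_r lengths[r] * w_r <= budget.
# Variables x = [w_1..w_R, s_1..s_F]; linprog minimizes, so negate the s part.
c = np.concatenate([np.zeros(n_rules), -np.ones(n_facts)])

# Coverage rows: s_f - sum_r A[f, r] w_r <= 0  →  [-A | I] x <= 0
A_ub = np.hstack([-A, np.eye(n_facts)])
b_ub = np.zeros(n_facts)

# Complexity row: [lengths | 0] x <= budget
A_ub = np.vstack([A_ub, np.concatenate([lengths, np.zeros(n_facts)])])
b_ub = np.append(b_ub, budget)

bounds = [(0, None)] * n_rules + [(0, 1)] * n_facts
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
weights = res.x[:n_rules]  # learned rule weights
```

In the column-generation variant described for larger KGs, one would start from a small candidate list, solve this LP, and then use the dual values of the coverage constraints to price and add new candidate rules that can improve the objective.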
Related papers
- Learning Rules from KGs Guided by Language Models [48.858741745144044]
Rule learning methods can be applied to predict potentially missing facts.
Ranking of rules is especially challenging over highly incomplete or biased KGs.
With the recent rise of Language Models (LMs) several works have claimed that LMs can be used as alternative means for KG completion.
arXiv Detail & Related papers (2024-09-12T09:27:36Z)
- Scalable Rule Lists Learning with Sampling [9.681286056736292]
We present a novel approach to learn nearly optimal rule lists from large datasets.
Our algorithm uses sampling to efficiently obtain an approximation of the optimal rule list.
Our algorithm identifies nearly optimal rule lists with a speed-up of up to two orders of magnitude over state-of-the-art exact approaches.
arXiv Detail & Related papers (2024-06-18T17:15:00Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning [71.05093203007357]
We propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph.
A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.
Our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods.
arXiv Detail & Related papers (2023-05-22T05:59:22Z)
- Knowledge Reasoning via Jointly Modeling Knowledge Graphs and Soft Rules [17.301284626706856]
Methods of knowledge graph completion (KGC) can be classified into two major categories: rule-based reasoning and embedding-based reasoning.
We propose a novel method that injects rules and learns representations iteratively to take full advantage of rules and embeddings.
arXiv Detail & Related papers (2023-01-07T05:24:29Z)
- LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking [62.634516517844496]
We propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules with the performance of neural learning.
Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches.
arXiv Detail & Related papers (2021-06-17T20:22:45Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs [20.251630903853016]
We propose new methods for pruning unpromising rules using rule hierarchies.
We show that the application of hierarchical pruning methods (HPMs) is effective in removing unpromising rules.
arXiv Detail & Related papers (2020-06-29T16:33:30Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
- Towards Learning Instantiated Logical Rules from Knowledge Graphs [20.251630903853016]
We present GPFL, a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs.
GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates that are acyclic abstract rules.
We reveal the presence of overfitting rules, their impact on the predictive performance, and the effectiveness of a simple validation method filtering out overfitting rules.
arXiv Detail & Related papers (2020-03-13T00:32:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.