A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously
from Knowledge Graphs
- URL: http://arxiv.org/abs/2009.10800v1
- Date: Tue, 22 Sep 2020 20:29:27 GMT
- Title: A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously
from Knowledge Graphs
- Authors: Susheel Suresh and Jennifer Neville
- Abstract summary: We develop a hybrid model that learns both high-quality rules and embeddings simultaneously.
Our method uses a cross-feedback paradigm wherein an embedding model guides the search of a rule mining system to mine rules and infer new facts.
- Score: 20.438750956142638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of knowledge graph (KG) reasoning has been widely explored by
traditional rule-based systems and more recently by knowledge graph embedding
methods. While logical rules can capture deterministic behavior in a KG, they
are brittle, and mining rules that infer facts beyond the known KG is
challenging. Probabilistic embedding methods are effective in capturing global
soft statistical tendencies and reasoning with them is computationally
efficient. While embedding representations learned from rich training data are
expressive, incompleteness and sparsity in real-world KGs can impact their
effectiveness. We aim to leverage the complementary properties of both methods
to develop a hybrid model that learns both high-quality rules and embeddings
simultaneously. Our method uses a cross-feedback paradigm wherein an embedding
model guides the search of a rule mining system to mine rules and
infer new facts. These new facts are sampled and further used to refine the
embedding model. Experiments on multiple benchmark datasets show the
effectiveness of our method over other competitive standalone and hybrid
baselines. We also show its efficacy in a sparse KG setting and finally explore
the connection with negative sampling.
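The cross-feedback loop described in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: the transitivity rule, the hash-based plausibility score standing in for a trained embedding model, and the threshold are all illustrative assumptions.

```python
import random

# Toy KG: facts are (head, relation, tail) triples.
facts = {("alice", "knows", "bob"), ("bob", "knows", "carol")}

def embedding_score(triple):
    # Stand-in for a learned embedding model's plausibility score
    # (e.g. TransE or DistMult); here just a deterministic pseudo-score.
    return (hash(triple) % 100) / 100.0

def mine_rules(kg, score_fn, threshold=0.5):
    # Rule mining guided by the embedding model: a fact inferred by a
    # candidate rule is kept only if it scores highly under the
    # embedding model. Illustrative rule:
    #   knows(X, Y) & knows(Y, Z) -> knows(X, Z)
    inferred = set()
    for (h1, r1, t1) in kg:
        for (h2, r2, t2) in kg:
            if r1 == r2 == "knows" and t1 == h2:
                cand = (h1, "knows", t2)
                if cand not in kg and score_fn(cand) >= threshold:
                    inferred.add(cand)
    return inferred

def refine_embeddings(kg, new_facts, sample_size=1):
    # Sample some of the inferred facts and add them to the training
    # set used to re-train the embedding model (re-training omitted).
    k = min(sample_size, len(new_facts))
    sampled = random.sample(sorted(new_facts), k)
    return kg | set(sampled)

# One iteration of the cross-feedback loop: embeddings guide rule
# mining, and the mined facts in turn refine the embedding model.
new_facts = mine_rules(facts, embedding_score, threshold=0.0)
facts = refine_embeddings(facts, new_facts)
```

In the full method this loop iterates, with the embedding model re-trained on the augmented fact set at each step; the sketch shows only a single round.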
Related papers
- Neural Probabilistic Logic Learning for Knowledge Graph Reasoning [10.473897846826956]
This paper aims to design a reasoning framework that achieves accurate reasoning on knowledge graphs.
We introduce a scoring module that effectively enhances the expressive power of embedding networks.
We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference.
arXiv Detail & Related papers (2024-07-04T07:45:46Z)
- Joint Learning of Label and Environment Causal Independence for Graph Out-of-Distribution Generalization [60.4169201192582]
We propose to incorporate label and environment causal independence (LECI) to fully make use of label and environment information.
LECI significantly outperforms prior methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-01T19:33:30Z)
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (short for Rule Embedding) that leverages logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- DegreEmbed: incorporating entity embedding into logic rule learning for knowledge graph reasoning [7.066269573204757]
Link prediction for knowledge graphs is the task aiming to complete missing facts by reasoning based on the existing knowledge.
We propose DegreEmbed, a model that combines embedding-based learning and logic rule mining for inferring on KGs.
arXiv Detail & Related papers (2021-12-18T13:38:48Z)
- MPLR: a novel model for multi-target learning of logical rules for knowledge graph reasoning [5.499688003232003]
We study the problem of learning logic rules for reasoning on knowledge graphs for completing missing factual triplets.
We propose a model called MPLR that improves on existing models by fully using the training data and considering multi-target scenarios.
Experimental results empirically demonstrate that our MPLR model outperforms state-of-the-art methods on five benchmark datasets.
arXiv Detail & Related papers (2021-12-12T09:16:00Z)
- EngineKGI: Closed-Loop Knowledge Graph Inference [37.15381932994768]
EngineKGI is a novel closed-loop KG inference framework.
It combines KGE and rule learning to complement each other in a closed-loop pattern.
Our model outperforms other baselines on link prediction tasks.
arXiv Detail & Related papers (2021-12-02T08:02:59Z)
- What is Learned in Knowledge Graph Embeddings? [3.224929252256631]
A knowledge graph (KG) is a data structure which represents entities and relations as the vertices and edges of a directed graph with edge types.
We investigate whether learning rules between relations is indeed what drives the performance of embedding-based methods.
Using experiments on synthetic KGs, we show that KG models can learn motifs and how this ability is degraded by non-motif edges.
arXiv Detail & Related papers (2021-10-19T13:52:11Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving cases that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to avoid the cumbersome curation of labeled instances for new relations.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.