MPLR: a novel model for multi-target learning of logical rules for
knowledge graph reasoning
- URL: http://arxiv.org/abs/2112.06189v1
- Date: Sun, 12 Dec 2021 09:16:00 GMT
- Title: MPLR: a novel model for multi-target learning of logical rules for
knowledge graph reasoning
- Authors: Yuliang Wei, Haotian Li, Guodong Xin, Yao Wang, Bailing Wang
- Abstract summary: We study the problem of learning logic rules for reasoning on knowledge graphs for completing missing factual triplets.
We propose a model called MPLR that improves on existing models by fully exploiting the training data and accounting for multi-target scenarios.
Experimental results empirically demonstrate that our MPLR model outperforms state-of-the-art methods on five benchmark datasets.
- Score: 5.499688003232003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-scale knowledge graphs (KGs) provide structured representations of
human knowledge. However, as it is impossible to contain all knowledge, KGs are
usually incomplete. Reasoning based on existing facts paves a way to discover
missing facts. In this paper, we study the problem of learning logic rules for
reasoning on knowledge graphs for completing missing factual triplets. Learning
logic rules equips a model with strong interpretability as well as the ability
to generalize to similar tasks. We propose a model called MPLR that improves
the existing models to fully use training data and multi-target scenarios are
considered. In addition, considering the deficiency in evaluating the
performance of models and the quality of mined rules, we further propose two
novel indicators to help with the problem. Experimental results empirically
demonstrate that our MPLR model outperforms state-of-the-art methods on five
benchmark datasets. The results also prove the effectiveness of the indicators.
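To make the rule-based completion setting concrete, here is a minimal sketch (not the MPLR model itself) of how a mined chain rule of the form r1(X, Y) ∧ r2(Y, Z) → r(X, Z) can infer missing triplets over a toy knowledge graph. The graph, entities, and rule below are hypothetical, chosen only to illustrate the mechanism:

```python
# Illustrative sketch of rule-based knowledge graph completion.
# Facts are (head, relation, tail) triplets; the chain rule
#   born_in(X, Y) AND city_of(Y, Z) -> nationality(X, Z)
# is a made-up example of the kind of logic rule such models mine.
from collections import defaultdict

facts = {
    ("alice", "born_in", "paris"),
    ("paris", "city_of", "france"),
    ("bob", "born_in", "lyon"),
    ("lyon", "city_of", "france"),
}

def apply_chain_rule(facts, body1, body2, head_rel):
    """Infer head_rel(X, Z) wherever body1(X, Y) and body2(Y, Z) both hold."""
    by_rel = defaultdict(list)
    for h, r, t in facts:
        by_rel[r].append((h, t))
    inferred = set()
    for x, y in by_rel[body1]:
        for y2, z in by_rel[body2]:
            if y == y2:
                candidate = (x, head_rel, z)
                if candidate not in facts:  # only genuinely new triplets
                    inferred.add(candidate)
    return inferred

new_facts = apply_chain_rule(facts, "born_in", "city_of", "nationality")
print(sorted(new_facts))
# [('alice', 'nationality', 'france'), ('bob', 'nationality', 'france')]
```

Learned models like MPLR additionally score and rank such rules rather than applying a single hand-written one, but the inference step they interpret is of this shape.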
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Neural Probabilistic Logic Learning for Knowledge Graph Reasoning [10.473897846826956]
This paper aims to design a reasoning framework that achieves accurate reasoning on knowledge graphs.
We introduce a scoring module that effectively enhances the expressive power of embedding networks.
We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference.
arXiv Detail & Related papers (2024-07-04T07:45:46Z)
- Exploring Large Language Models for Knowledge Graph Completion [17.139056629060626]
We consider triples in knowledge graphs as text sequences and introduce an innovative framework called Knowledge Graph LLM.
Our technique employs entity and relation descriptions of a triple as prompts and utilizes the response for predictions.
Experiments on various benchmark knowledge graphs demonstrate that our method attains state-of-the-art performance in tasks such as triple classification and relation prediction.
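The "triples as text sequences" idea can be sketched as follows. This is a hypothetical illustration of serializing a triple plus entity and relation descriptions into a classification prompt; the template, names, and descriptions are assumptions, not the paper's exact prompt format:

```python
# Hypothetical sketch: turn a KG triple and its textual descriptions into
# a prompt for an LLM-based triple classifier. Template is illustrative.
def build_triple_prompt(head, relation, tail, descriptions):
    """Serialize a triple and its descriptions into a yes/no classification prompt."""
    lines = [
        f"Head entity: {head} -- {descriptions.get(head, 'no description')}",
        f"Relation: {relation} -- {descriptions.get(relation, 'no description')}",
        f"Tail entity: {tail} -- {descriptions.get(tail, 'no description')}",
        "Question: Is this triple true? Answer yes or no.",
    ]
    return "\n".join(lines)

descriptions = {
    "Marie Curie": "Polish-French physicist and chemist",
    "award_received": "links a person to a prize they won",
    "Nobel Prize in Physics": "annual physics award",
}
prompt = build_triple_prompt("Marie Curie", "award_received",
                             "Nobel Prize in Physics", descriptions)
print(prompt)
```

The model's yes/no response (or its token probabilities) would then be used as the triple-classification prediction.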
arXiv Detail & Related papers (2023-08-26T16:51:17Z)
- Deep Manifold Learning for Reading Comprehension and Logical Reasoning Tasks with Polytuplet Loss [0.0]
The current trend in developing machine learning models for reading comprehension and logical reasoning tasks is focused on improving the models' abilities to understand and utilize logical rules.
This work focuses on providing a novel loss function and accompanying model architecture that has more interpretable components than some other models.
Our strategy involves emphasizing relative accuracy over absolute accuracy and can theoretically produce the correct answer with incomplete knowledge.
arXiv Detail & Related papers (2023-04-03T14:48:34Z)
- A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic, and Multimodal [57.8455911689554]
Knowledge graph reasoning (KGR) aims to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs).
It has been proven to significantly benefit the use of KGs in many AI applications, such as question answering and recommendation systems.
arXiv Detail & Related papers (2022-12-12T08:40:04Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- DegreEmbed: incorporating entity embedding into logic rule learning for knowledge graph reasoning [7.066269573204757]
Link prediction for knowledge graphs is the task aiming to complete missing facts by reasoning based on the existing knowledge.
We propose DegreEmbed, a model that combines embedding-based learning and logic rule mining for inferring on KGs.
arXiv Detail & Related papers (2021-12-18T13:38:48Z)
- A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Knowledge-driven Data Construction for Zero-shot Evaluation in Commonsense Question Answering [80.60605604261416]
We propose a novel neuro-symbolic framework for zero-shot question answering across commonsense tasks.
We vary the set of language models, training regimes, knowledge sources, and data generation strategies, and measure their impact across tasks.
We show that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.
arXiv Detail & Related papers (2020-11-07T22:52:21Z)
- A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously from Knowledge Graphs [20.438750956142638]
We develop a hybrid model that learns both high-quality rules and embeddings simultaneously.
Our method uses a cross-feedback paradigm wherein an embedding model guides the search of a rule mining system to mine rules and infer new facts.
arXiv Detail & Related papers (2020-09-22T20:29:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.