Step out of KG: Knowledge Graph Completion via Knowledgeable Retrieval and Reading Comprehension
- URL: http://arxiv.org/abs/2210.05921v1
- Date: Wed, 12 Oct 2022 04:50:55 GMT
- Title: Step out of KG: Knowledge Graph Completion via Knowledgeable Retrieval and Reading Comprehension
- Authors: Xin Lv, Yankai Lin, Zijun Yao, Kaisheng Zeng, Jiajie Zhang, Lei Hou and Juanzi Li
- Abstract summary: We propose a new model based on information retrieval and reading comprehension, namely IR4KGC.
Specifically, we pre-train a knowledge-based information retrieval module that can retrieve documents related to the triples to be completed. Then, the retrieved documents are handed over to the reading comprehension module to generate the predicted answers.
In experiments, we find that our model successfully handles relations that cannot be inferred from existing knowledge and achieves good results on KGC datasets.
- Score: 33.77833862891007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs, as the cornerstone of many AI applications, usually face
serious incompleteness problems. In recent years, there have been many efforts
to study automatic knowledge graph completion (KGC), most of which use existing
knowledge to infer new knowledge. However, in our experiments, we find that not
all relations can be obtained by inference, which constrains the performance of
existing models. To alleviate this problem, we propose a new model based on
information retrieval and reading comprehension, namely IR4KGC. Specifically,
we pre-train a knowledge-based information retrieval module that can retrieve
documents related to the triples to be completed. Then, the retrieved documents
are handed over to the reading comprehension module to generate the predicted
answers. In experiments, we find that our model successfully handles
relations that cannot be inferred from existing knowledge and achieves good
results on KGC datasets.
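As a concrete picture of the retrieve-then-read pipeline the abstract describes, here is a minimal sketch. The TF-IDF retriever and one-line reader are stand-ins chosen for brevity, not IR4KGC's pre-trained modules; all names and data are illustrative.

```python
# Minimal sketch of a retrieve-then-read KGC pipeline. The TF-IDF retriever
# and trivial reader below are stand-ins for IR4KGC's pre-trained modules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Marie Curie was born in Warsaw, Poland.",
    "Albert Einstein developed the theory of relativity.",
    "Warsaw is the capital city of Poland.",
]

def retrieve(query, docs, k=2):
    """Rank documents by TF-IDF cosine similarity to the verbalized triple."""
    vec = TfidfVectorizer().fit(docs + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def read(query, docs):
    """Stand-in reader: a real system would run a generative reading-
    comprehension model over the retrieved documents to produce the answer."""
    return docs[0]

# Complete the triple (Marie Curie, place_of_birth, ?) by verbalizing it
# as a query, retrieving evidence, and reading off an answer.
query = "Marie Curie place of birth"
print(read(query, retrieve(query, corpus)))
```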
Related papers
- Large Language Models Struggle to Learn Long-Tail Knowledge [39.01608375863687]
We study the relationship between the knowledge memorized by large language models and the information in pre-training datasets scraped from the web.
In particular, we show that a language model's ability to answer a fact-based question relates to how many documents associated with that question were seen during pre-training.
arXiv Detail & Related papers (2022-11-15T18:49:27Z)
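As a toy illustration of the analysis this entry describes, one can count how many pre-training documents mention both the question and answer entities. The documents, entities, and substring-matching heuristic below are all hypothetical, not the paper's actual pipeline.

```python
# Toy version of a relevant-document count: how many pre-training documents
# mention both the question entity and the answer entity? Facts backed by
# few such documents tend to be answered less accurately, per the finding.
pretraining_docs = [
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "The Nobel Prize is awarded in Stockholm.",
    "Marie Curie was a pioneer in radioactivity research.",
]

def relevant_doc_count(q_entity, a_entity, docs):
    return sum(1 for d in docs if q_entity in d and a_entity in d)

print(relevant_doc_count("Marie Curie", "Nobel Prize", pretraining_docs))  # -> 1
```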
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework for knowledge-based VQA.
We shed light on the multi-modal implicit knowledge in vision-language pre-training models to mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
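A hedged sketch of the instance-dropping idea in the entry above: if retrieval confidence for a question is too low, abstain rather than risk an error-prone answer. The interfaces, stub components, and threshold are assumptions, not the paper's actual scheme.

```python
def answer_or_abstain(question, retrieve_fn, read_fn, threshold=0.3):
    """Skip questions whose retrieval confidence is too low, rather than
    risk an error-prone answer. Interfaces here are hypothetical."""
    docs, scores = retrieve_fn(question)
    if not scores or max(scores) < threshold:
        return None  # abstain: instance judged error-prone
    return read_fn(question, docs)

# Demo with stub retriever and reader.
print(answer_or_abstain(
    "What is in the image?",
    lambda q: (["a red apple on a table"], [0.9]),
    lambda q, d: d[0],
))  # -> "a red apple on a table"
```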
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach for harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
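One way to picture the harvesting loop in the entry above: verbalize the relation as a slotted prompt and keep entity pairs the LM scores highly. The `lm_score` stub and all names below are illustrative stand-ins, not BertNet's actual search procedure.

```python
# Illustrative prompt-based KG harvesting: fill a slotted relation prompt
# with candidate entity pairs and keep the ones the LM scores highly.
from itertools import product

def lm_score(sentence):
    """Stand-in: a real system would score the sentence with a pretrained LM."""
    known = {"Paris is the capital of France.": 0.95,
             "Berlin is the capital of France.": 0.05}
    return known.get(sentence, 0.1)

def harvest(prompt, heads, tails, threshold=0.5):
    return [(h, t) for h, t in product(heads, tails)
            if lm_score(prompt.format(head=h, tail=t)) >= threshold]

print(harvest("{head} is the capital of {tail}.",
              ["Paris", "Berlin"], ["France"]))  # -> [('Paris', 'France')]
```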
- Embedding Knowledge for Document Summarization: A Survey [66.76415502727802]
Previous work has shown that knowledge-embedded document summarizers excel at generating superior digests.
We propose novel taxonomies to recapitulate knowledge and knowledge embeddings from the document summarization perspective.
arXiv Detail & Related papers (2022-04-24T04:36:07Z)
- Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
We import the knowledge from multiple models into the knowledge base, from which the fused knowledge is exported back to a single model.
Experiments on text classification show promising results.
arXiv Detail & Related papers (2020-12-25T12:27:44Z)
- PPKE: Knowledge Representation Learning by Path-based Pre-training [43.41597219004598]
We propose a Path-based Pre-training model to learn Knowledge Embeddings, called PPKE.
Our model achieves state-of-the-art results on several benchmark datasets for link prediction and relation prediction tasks.
arXiv Detail & Related papers (2020-12-07T10:29:30Z)
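The entry above gives little detail on PPKE itself, so the sketch below shows only a generic TransE-style flavor of path-based training, in which relation vectors composed along a path should approximate the direct relation; it is not PPKE's actual objective.

```python
# Generic path-based training signal: minimize the gap between a direct
# relation vector and the composition of relation vectors along a path.
import numpy as np

rng = np.random.default_rng(0)
rel = {r: rng.normal(size=8) for r in ["born_in", "city_of", "nationality"]}

def path_loss(path, direct):
    composed = sum(rel[r] for r in path)
    return float(np.linalg.norm(composed - rel[direct]) ** 2)

# Pre-training would minimize this loss over sampled KG paths, e.g.
# born_in -> city_of should compose to something close to nationality.
print(path_loss(["born_in", "city_of"], "nationality"))
```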
- Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources [26.256653692882715]
How to engage commonsense effectively in question answering systems is still under exploration.
We propose a novel question-answering method by integrating ConceptNet, Wikipedia, and the Cambridge Dictionary.
We use a pre-trained language model to encode the question, retrieved knowledge and choices, and propose an answer choice-aware attention mechanism.
arXiv Detail & Related papers (2020-11-05T08:50:43Z)
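A minimal sketch of what an answer choice-aware attention, as named in the entry above, could look like: weight the retrieved knowledge by its similarity to an answer choice. The random vectors stand in for PLM encodings; the paper's mechanism is more involved.

```python
# Choice-aware attention over retrieved knowledge: attention logits come
# from knowledge-choice similarity; the output is a weighted knowledge
# summary specific to that answer choice.
import numpy as np

rng = np.random.default_rng(1)
knowledge = rng.normal(size=(5, 16))   # 5 encoded knowledge snippets
choice = rng.normal(size=16)           # one encoded answer choice

scores = knowledge @ choice                      # attention logits
weights = np.exp(scores) / np.exp(scores).sum()  # softmax
choice_aware_knowledge = weights @ knowledge     # weighted summary
print(choice_aware_knowledge.shape)              # (16,)
```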
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge is often more than enough, since the output description may only cover the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free us from this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge-graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
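Schematically, the zero-shot setup in the entry above maps a new relation's text-description encoding to a KG relation embedding with a generator, while a discriminator learns to tell generated embeddings from real ones. The single linear maps and shapes below are illustrative only, not the paper's architecture.

```python
# Forward pass of a schematic text-to-relation-embedding GAN: the generator
# turns a description encoding into a relation embedding; the discriminator
# (trained adversarially in the real model) scores its plausibility.
import numpy as np

rng = np.random.default_rng(2)
W_gen = rng.normal(size=(32, 16))  # text encoding (32-d) -> relation (16-d)
W_dis = rng.normal(size=16)        # discriminator: embedding -> logit

text_enc = rng.normal(size=32)     # encoding of a new relation's description
fake_rel = text_enc @ W_gen        # generated embedding for the unseen relation
logit = fake_rel @ W_dis           # discriminator score
print(fake_rel.shape, float(logit))
```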
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.