A Systematic Investigation of KB-Text Embedding Alignment at Scale
- URL: http://arxiv.org/abs/2106.01586v1
- Date: Thu, 3 Jun 2021 04:14:11 GMT
- Title: A Systematic Investigation of KB-Text Embedding Alignment at Scale
- Authors: Vardaan Pahuja, Yu Gu, Wenhu Chen, Mehdi Bahrami, Lei Liu, Wei-Peng
Chen and Yu Su
- Abstract summary: Knowledge bases (KBs) and text often contain complementary knowledge.
How to jointly embed and reason with both knowledge sources to fully leverage the complementary information is still largely an open problem.
We conduct a large-scale, systematic investigation of aligning KB and text embeddings for joint reasoning.
- Score: 17.636921566637298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge bases (KBs) and text often contain complementary knowledge: KBs
store structured knowledge that can support long-range reasoning, while text
stores more comprehensive and timely knowledge in an unstructured way.
Separately embedding the individual knowledge sources into vector spaces has
demonstrated tremendous successes in encoding the respective knowledge, but how
to jointly embed and reason with both knowledge sources to fully leverage the
complementary information is still largely an open problem. We conduct a
large-scale, systematic investigation of aligning KB and text embeddings for
joint reasoning. We set up a novel evaluation framework with two evaluation
tasks, few-shot link prediction and analogical reasoning, and evaluate an array
of KB-text embedding alignment methods. We also demonstrate how such alignment
can infuse textual information into KB embeddings for more accurate link
prediction on emerging entities and events, using COVID-19 as a case study.
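The alignment methods investigated here include projection-based approaches that map one embedding space into the other. As a minimal illustration (the toy data, dimensions, and exact linear relationship are assumptions for this sketch, not the paper's setup), a linear map between a KB embedding space and a text embedding space can be fit by least squares over known aligned entity pairs:

```python
import numpy as np

# Hypothetical toy setup: 4 entities with separately learned KB embeddings
# and text embeddings (both dim 3), plus known alignment pairs. The text
# space is constructed as an exact linear image of the KB space so the
# recovered map can be checked -- a toy assumption, not the paper's data.
rng = np.random.default_rng(0)
kb_emb = rng.normal(size=(4, 3))      # one KB embedding per entity
true_map = rng.normal(size=(3, 3))
text_emb = kb_emb @ true_map          # corresponding text embeddings

# Projection-based alignment: fit W so that kb_emb @ W approximates
# text_emb, via least squares over the aligned entity pairs.
W, *_ = np.linalg.lstsq(kb_emb, text_emb, rcond=None)

aligned = kb_emb @ W
alignment_error = float(np.abs(aligned - text_emb).max())
```

In the paper's evaluation, projected embeddings like these would then be scored on few-shot link prediction and analogical reasoning; here the sketch only verifies that the map recovers the toy relationship.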
Related papers
- Bridging the KB-Text Gap: Leveraging Structured Knowledge-aware
Pre-training for KBQA [28.642711264323786]
We propose a Structured Knowledge-aware Pre-training method (SKP) to bridge the gap between texts and structured KBs.
In the pre-training stage, we introduce two novel structured knowledge-aware tasks, guiding the model to effectively learn the implicit relationship and better representations of complex subgraphs.
In the downstream KBQA task, we further design an efficient linearization strategy and an interval attention mechanism, which assist the model to better encode complex subgraphs.
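A linearization strategy of this kind turns a complex subgraph into a sequence a text encoder can consume. A minimal sketch (the bracketed separator tokens, the triple format, and the example entities are illustrative assumptions, not SKP's exact scheme):

```python
# Illustrative linearization of a KB subgraph into a single token
# sequence for a text encoder. Separator tokens are assumptions.
def linearize_subgraph(triples):
    """Flatten (head, relation, tail) triples into one string."""
    parts = [f"[HEAD] {h} [REL] {r} [TAIL] {t}" for h, r, t in triples]
    return " [SEP] ".join(parts)

subgraph = [
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "award", "Nobel Prize in Physics"),
]
linear = linearize_subgraph(subgraph)
```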
arXiv Detail & Related papers (2023-08-28T09:22:02Z) - KnowledGPT: Enhancing Large Language Models with Retrieval and Storage
Access on Knowledge Bases [55.942342665806656]
KnowledGPT is a comprehensive framework to bridge large language models with various knowledge bases.
The retrieval process employs program-of-thought prompting, which generates search language for KBs in code format.
KnowledGPT offers the capability to store knowledge in a personalized KB, catering to individual user demands.
arXiv Detail & Related papers (2023-08-17T13:07:00Z) - Mapping and Cleaning Open Commonsense Knowledge Bases with Generative
Translation [14.678465723838599]
In particular, open information extraction (OpenIE) is often used to induce structure from text.
However, OpenIE assertions use an open-ended, non-canonicalized set of relations, making downstream exploitation of the extracted knowledge harder.
We propose approaching the problem by generative translation, i.e., by training a language model to generate fixed-schema assertions from open ones.
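Generative translation here means pairing open, non-canonicalized assertions with fixed-schema targets so a language model can learn the mapping. A toy sketch of how such training pairs might be formatted (the relation vocabulary, the separator, and the pairing below are illustrative assumptions, not the paper's data):

```python
# Toy sketch of "generative translation" training data: an open OpenIE
# assertion is paired with a canonicalized target assertion. The
# canonical relation vocabulary here is a made-up example.
CANONICAL = {"is capital of": "capitalOf", "was born in": "bornIn"}

def to_training_pair(open_triple):
    """Return (source, target) strings; target is None with no match."""
    subj, open_rel, obj = open_triple
    target_rel = CANONICAL.get(open_rel)
    source = f"{subj} ; {open_rel} ; {obj}"
    target = None if target_rel is None else f"{subj} ; {target_rel} ; {obj}"
    return source, target

src, tgt = to_training_pair(("Paris", "is capital of", "France"))
```

A real pipeline would feed such pairs to a sequence-to-sequence model rather than a lookup table; the table only stands in for the learned translation.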
arXiv Detail & Related papers (2023-06-22T09:42:54Z) - Completeness, Recall, and Negation in Open-World Knowledge Bases: A
Survey [15.221057217833492]
We discuss how knowledge about completeness, recall, and negation in KBs can be expressed, extracted, and inferred.
This survey is targeted at two types of audiences: (1) practitioners who are interested in tracking KB quality, focusing extraction efforts, and building quality-aware downstream applications; and (2) data management, knowledge base and semantic web researchers who wish to understand the state of the art of knowledge bases beyond the open-world assumption.
arXiv Detail & Related papers (2023-05-09T12:50:16Z) - QA Is the New KR: Question-Answer Pairs as Knowledge Bases [105.692569000534]
We argue that the proposed type of KB has many of the key advantages of a traditional symbolic KB.
Unlike a traditional KB, this information store is well-aligned with common user information needs.
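A QA-pair store of this kind is typically queried by similarity search over question representations. A minimal sketch (the stored pairs are made up, and the bag-of-words embedding stands in for a learned encoder, which is an assumption):

```python
import numpy as np

# Minimal sketch of a QA-pair "knowledge base": answers are retrieved
# by nearest-neighbor search over question vectors. The toy
# bag-of-words embedding below stands in for a learned encoder.
qa_store = [
    ("who wrote hamlet", "William Shakespeare"),
    ("capital of france", "Paris"),
]
vocab = sorted({w for q, _ in qa_store for w in q.split()})

def embed(text):
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def retrieve(query):
    """Return the stored answer whose question is most similar."""
    q_vec = embed(query)
    scores = [
        q_vec @ embed(q)
        / (np.linalg.norm(q_vec) * np.linalg.norm(embed(q)) + 1e-9)
        for q, _ in qa_store
    ]
    return qa_store[int(np.argmax(scores))][1]

answer = retrieve("what is the capital of france")
```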
arXiv Detail & Related papers (2022-07-01T19:09:08Z) - Relational world knowledge representation in contextual language models:
A review [19.176173014629185]
We take a natural language processing perspective on the limitations of knowledge bases (KBs).
We propose a novel taxonomy for relational knowledge representation in contextual language models (LMs).
arXiv Detail & Related papers (2021-04-12T21:50:55Z) - Contextualized Knowledge-aware Attentive Neural Network: Enhancing
Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z) - Reasoning over Vision and Language: Exploring the Benefits of
Supplemental Knowledge [59.87823082513752]
This paper investigates the injection of knowledge from general-purpose knowledge bases (KBs) into vision-and-language transformers.
We empirically study the relevance of various KBs to multiple tasks and benchmarks.
The technique is model-agnostic and can expand the applicability of any vision-and-language transformer with minimal computational overhead.
arXiv Detail & Related papers (2021-01-15T08:37:55Z) - Improving Machine Reading Comprehension with Contextualized Commonsense
Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
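The teacher-student paradigm typically trains the student to match the teacher's output distribution, often with a KL-divergence objective. A toy sketch of that objective (the tiny two-class distributions are illustrative, not the paper's model outputs):

```python
import numpy as np

# Toy sketch of a teacher-student distillation objective: the student
# reader is trained to match the (knowledge-infused) teacher's answer
# distribution by minimizing KL divergence.
def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with a small epsilon."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

teacher = np.array([0.9, 0.1])   # teacher's answer distribution
student = np.array([0.6, 0.4])   # student's current distribution

loss_before = kl_divergence(teacher, student)   # positive: mismatch
loss_matched = kl_divergence(teacher, teacher)  # zero: perfect match
```

Gradient updates on the student would push `loss_before` toward zero; the sketch only evaluates the loss at two points.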
arXiv Detail & Related papers (2020-09-12T17:20:01Z) - A Survey on Complex Question Answering over Knowledge Base: Recent
Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.