Question Answering over Knowledge Base using Language Model Embeddings
- URL: http://arxiv.org/abs/2010.08883v1
- Date: Sat, 17 Oct 2020 22:59:34 GMT
- Title: Question Answering over Knowledge Base using Language Model Embeddings
- Authors: Sai Sharath Japa and Banafsheh Rekabdar
- Abstract summary: This paper focuses on using a pre-trained language model for the Knowledge Base Question Answering task.
We further fine-tuned these embeddings with a two-way attention mechanism from the knowledge base to the asked question.
Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Knowledge Base represents facts about the world explicitly, often
in some form of subsumption ontology, rather than implicitly, embedded in
procedural code the way a conventional computer program does. While knowledge
bases are growing rapidly, retrieving information from them remains a
challenge.
Knowledge Base Question Answering is one of the promising approaches for
extracting substantial knowledge from Knowledge Bases. Unlike web search,
Question Answering over a knowledge base gives accurate and concise results,
provided that natural language questions can be understood and mapped precisely
to an answer in the knowledge base. However, some of the existing
embedding-based methods for knowledge base question answering systems ignore
the subtle correlation between the question and the Knowledge Base (e.g.,
entity types, relation paths, and context) and suffer from the Out Of
Vocabulary problem. In this paper, we focus on using a pre-trained language
model for the Knowledge Base Question Answering task. First, we used
BERT-base-uncased for our initial experiments. We further fine-tuned these
embeddings with a two-way attention mechanism from the knowledge base to the
asked question and from the asked question to the knowledge base answer
aspects. Our method is based on a simple Convolutional Neural Network
architecture with a Multi-Head Attention mechanism to represent the asked
question dynamically in multiple aspects. Our experimental results show the
effectiveness and superiority of BERT pre-trained language model embeddings
over other well-known embedding methods for question answering over knowledge
bases.
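As a rough illustration of the pipeline the abstract describes, the PyTorch sketch below combines BERT-base-uncased token embeddings, a multi-head attention layer, a 1-D convolution over the question tokens, and a toy two-way scoring of knowledge-base answer aspects. It is a minimal sketch under assumed layer sizes and a placeholder scoring function, not the authors' implementation.

```python
# A minimal PyTorch sketch of the pipeline described above. It is NOT the
# authors' code: all layer sizes, the pooling step, and the two-way scoring
# function are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class QuestionEncoder(nn.Module):
    def __init__(self, hidden=768, heads=8, conv_channels=256, kernel=3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.mha = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.conv = nn.Conv1d(hidden, conv_channels, kernel, padding=kernel // 2)
        self.proj = nn.Linear(conv_channels, hidden)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from the pre-trained language model.
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        # Multi-head attention re-weights question tokens ("multiple aspects").
        attn_out, _ = self.mha(h, h, h,
                               key_padding_mask=~attention_mask.bool())
        # 1-D convolution over the token axis, then max-pool to a fixed vector.
        c = torch.relu(self.conv(attn_out.transpose(1, 2)))  # (B, C, T)
        return self.proj(c.max(dim=2).values)                # (B, hidden)

def two_way_score(q_vec, aspect_vecs):
    """Toy stand-in for the two-way attention: question -> KB weights the
    answer aspects; the weighted KB summary is scored back against the
    question."""
    w = torch.softmax(aspect_vecs @ q_vec, dim=0)        # question -> KB
    summary = (w.unsqueeze(1) * aspect_vecs).sum(dim=0)  # KB -> question
    return torch.dot(summary, q_vec)

tok = BertTokenizer.from_pretrained("bert-base-uncased")
enc = QuestionEncoder()
batch = tok(["who wrote the pragmatic programmer"], return_tensors="pt")
q = enc(batch["input_ids"], batch["attention_mask"])[0]
aspects = torch.randn(3, 768)  # placeholders for entity type / relation / context
print(two_way_score(q, aspects).item())
```

Max-pooling over the convolution channels is one simple way to obtain a fixed-size question vector; whether the paper pools this way is not stated in the abstract.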
Related papers
- A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation [51.31429493814664]
We present a benchmark named multi-source Wizard of Wikipedia for evaluating multi-source dialogue knowledge selection and response generation.
We propose a new challenge, dialogue knowledge plug-and-play, which aims to test an already trained dialogue model on using new support knowledge from previously unseen sources.
arXiv Detail & Related papers (2024-03-06T06:54:02Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge in vision-language pre-training models and mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Coarse-to-Careful: Seeking Semantic-related Knowledge for Open-domain Commonsense Question Answering [12.406729445165857]
It is common to utilize external knowledge to help machines answer questions that require background commonsense.
We propose a semantic-driven knowledge-aware QA framework, which controls the knowledge injection in a coarse-to-careful fashion.
arXiv Detail & Related papers (2021-07-04T10:56:36Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations (a generic GCN layer is sketched after this entry).
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
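For readers unfamiliar with the GCN step this entry mentions, below is a generic graph-convolution layer in the Kipf & Welling style; CKANN's customized variant is not reproduced here, and the toy graph and dimensions are assumptions.

```python
# A generic graph-convolution layer, shown only to illustrate the
# "structure information via a GCN" step; CKANN's customized variant is
# not reproduced, and the toy graph below is an assumption.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = adj @ x / deg                 # mean-aggregate neighbour features
        return torch.relu(self.lin(h))

# Toy knowledge-graph fragment: 4 entity nodes in a chain, plus self-loops.
adj = torch.eye(4)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
x = torch.randn(4, 16)                    # random node embeddings
print(GCNLayer(16, 16)(x, adj).shape)     # torch.Size([4, 16])
```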
- Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method to overcome the catastrophic forgetting problem.
The comprehensive experiments demonstrate its effectiveness and efficiency when working with the evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base [8.591839265985412]
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI).
In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base.
Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations.
arXiv Detail & Related papers (2020-11-28T08:08:25Z)
- Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources [26.256653692882715]
How to effectively engage commonsense in question answering systems is still under exploration.
We propose a novel question-answering method by integrating ConceptNet, Wikipedia, and the Cambridge Dictionary.
We use a pre-trained language model to encode the question, retrieved knowledge, and answer choices, and propose an answer choice-aware attention mechanism.
arXiv Detail & Related papers (2020-11-05T08:50:43Z)
- Unsupervised Commonsense Question Answering with Self-Talk [71.63983121558843]
We propose an unsupervised framework based on self-talk as a novel approach to commonsense tasks.
Inspired by inquiry-based discovery learning, our approach queries language models with a number of information-seeking questions (a minimal sketch of this loop follows the list).
Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines.
arXiv Detail & Related papers (2020-04-11T20:43:37Z)
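To make the self-talk idea concrete, here is a minimal, hypothetical version of the loop using GPT-2 through the Hugging Face pipeline API; the paper's actual prompt templates, models, and answer scoring are more elaborate.

```python
# A minimal, hypothetical version of the self-talk loop using GPT-2 via the
# Hugging Face pipeline API. The prompt templates and the single
# clarification question below are invented for illustration.
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")

context = "Karen was assigned a roommate her first year of college."
question = "Why might Karen talk to her roommate?"

# 1) Pose an information-seeking question about the context to the LM ...
clarification = "What is the purpose of a roommate?"
# 2) ... and let the same LM answer it, yielding extra background text.
background = gen(clarification, max_new_tokens=20,
                 do_sample=False)[0]["generated_text"]
# 3) Append the self-generated knowledge, then answer the original question.
enriched = f"{context} {background}\n{question}"
print(gen(enriched, max_new_tokens=20, do_sample=False)[0]["generated_text"])
```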