Knowledge Fusion and Semantic Knowledge Ranking for Open Domain Question Answering
- URL: http://arxiv.org/abs/2004.03101v2
- Date: Fri, 17 Apr 2020 06:46:36 GMT
- Title: Knowledge Fusion and Semantic Knowledge Ranking for Open Domain Question Answering
- Authors: Pratyay Banerjee and Chitta Baral
- Abstract summary: Open Domain Question Answering requires systems to retrieve external knowledge and perform multi-hop reasoning.
We learn a semantic knowledge ranking model to re-rank knowledge retrieved through Lucene-based information retrieval systems.
We propose a "knowledge fusion model" which leverages knowledge in BERT-based language models with externally retrieved knowledge.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open Domain Question Answering requires systems to retrieve external
knowledge and perform multi-hop reasoning by composing knowledge spread over
multiple sentences. In the recently introduced open domain question answering
challenge datasets, QASC and OpenBookQA, we need to perform retrieval of facts
and compose facts to correctly answer questions. In our work, we learn a
semantic knowledge ranking model to re-rank knowledge retrieved through
Lucene-based information retrieval systems. We further propose a "knowledge fusion
model" which leverages knowledge in BERT-based language models with externally
retrieved knowledge and improves the knowledge understanding of the BERT-based
language models. On both OpenBookQA and QASC datasets, the knowledge fusion
model with semantically re-ranked knowledge outperforms previous attempts.
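The two components named in the abstract, a semantic ranker over Lucene-retrieved facts and a fusion step that passes the selected knowledge to a BERT-based reader, can be illustrated with a short sketch. This is a minimal sketch, not the authors' implementation: the cross-encoder checkpoint, the pre-retrieved `facts` list standing in for Lucene output, and the [SEP]-joined fusion format are all assumptions.

```python
# Hedged sketch: semantically re-rank retrieved facts, then fuse the top
# facts into a BERT-style input. Checkpoint and input format are assumed,
# not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

RANKER = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed off-the-shelf re-ranker
tokenizer = AutoTokenizer.from_pretrained(RANKER)
ranker = AutoModelForSequenceClassification.from_pretrained(RANKER)
ranker.eval()

def rerank(question: str, facts: list[str], top_k: int = 5) -> list[str]:
    """Score each (question, fact) pair with the cross-encoder and keep top_k."""
    enc = tokenizer([question] * len(facts), facts,
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = ranker(**enc).logits.squeeze(-1)  # one relevance score per fact
    keep = scores.argsort(descending=True)[:top_k]
    return [facts[i] for i in keep.tolist()]

def fuse(question: str, option: str, ranked_facts: list[str]) -> str:
    """Concatenate re-ranked knowledge with the QA pair for a BERT-style scorer."""
    return " [SEP] ".join([" ".join(ranked_facts), question, option])
```

A multiple-choice reader would encode fuse(question, option, rerank(question, facts)) once per answer option and pick the highest-scoring option.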
Related papers
- Knowledge Acquisition Disentanglement for Knowledge-based Visual Question Answering with Large Language Models [10.526705722339775]
Knowledge-based Visual Question Answering (KVQA) requires both image and world knowledge to answer questions.
Current methods first retrieve knowledge from the image and an external knowledge base with the original complex question, then generate answers with Large Language Models (LLMs).
We propose DKA: Disentangled Knowledge Acquisition from LLM feedback, a training-free framework that disentangles knowledge acquisition to avoid confusion.
arXiv Detail & Related papers (2024-07-22T03:05:32Z)
- Towards Better Generalization in Open-Domain Question Answering by Mitigating Context Memorization [67.92796510359595]
Open-domain Question Answering (OpenQA) aims at answering factual questions with an external large-scale knowledge corpus.
It is still unclear how well an OpenQA model can transfer to completely new knowledge domains.
We introduce Corpus-Invariant Tuning (CIT), a simple but effective training strategy, to mitigate the knowledge over-memorization.
arXiv Detail & Related papers (2024-04-02T05:44:50Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Merging Generated and Retrieved Knowledge for Open-Domain QA [72.42262579925911]
COMBO (Compatibility-Oriented knowledge Merging for Better Open-domain QA) is a framework for merging generated and retrieved knowledge.
We show that COMBO outperforms competitive baselines on three out of four tested open-domain QA benchmarks.
arXiv Detail & Related papers (2023-10-22T19:37:06Z)
- Structured Knowledge Grounding for Question Answering [0.23068481501673416]
We propose to leverage language and knowledge for knowledge-based question answering with flexibility, breadth of coverage, and structured reasoning.
Specifically, we devise a knowledge construction method that retrieves the relevant context with a dynamic hop.
We also devise a deep fusion mechanism to bridge the information-exchange bottleneck between the language and the knowledge.
arXiv Detail & Related papers (2022-09-17T08:48:50Z)
- Kformer: Knowledge Injection in Transformer Feed-Forward Layers [107.71576133833148]
We propose a novel knowledge fusion model, namely Kformer, which incorporates external knowledge through the feed-forward layer in Transformer.
We empirically find that simply injecting knowledge into the FFN can strengthen the pre-trained language model's abilities and facilitate current knowledge fusion methods (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-01-15T03:00:27Z)
- Coarse-to-Careful: Seeking Semantic-related Knowledge for Open-domain Commonsense Question Answering [12.406729445165857]
External knowledge is widely used to help machines answer questions that require background commonsense.
We propose a semantic-driven knowledge-aware QA framework, which controls the knowledge injection in a coarse-to-careful fashion.
arXiv Detail & Related papers (2021-07-04T10:56:36Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, the Knowledge-aware Neural Network (KNN), which learns QA sentence representations through tight interaction between external KG knowledge and textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting when the knowledge required to answer a question is not given/annotated, neither at training nor test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models. Second, symbolic knowledge encoded in knowledge bases.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
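As referenced in the Kformer entry above, the following is a minimal sketch of FFN-layer knowledge injection. It assumes knowledge arrives as one embedding per retrieved fact and that the same embedding acts as both the key and the value of the injected FFN neuron; Kformer's exact formulation, with separate key/value projections, differs in detail.

```python
# Hedged sketch in the spirit of Kformer: each knowledge embedding is
# treated as an extra feed-forward "neuron" whose activation adds the
# fact back into the token representation. Dimensions and the shared
# key/value embedding are illustrative assumptions.
import torch
import torch.nn as nn

class KnowledgeFFN(nn.Module):
    def __init__(self, d_model: int = 768, d_ff: int = 3072):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)   # standard FFN expansion
        self.w2 = nn.Linear(d_ff, d_model)   # standard FFN projection
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, knowledge: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); knowledge: (batch, n_facts, d_model)
        core = self.w2(self.act(self.w1(x)))             # ordinary FFN path
        gate = self.act(x @ knowledge.transpose(1, 2))   # token-to-fact affinity
        injected = gate @ knowledge                      # weighted sum of facts
        return core + injected
```

Placing the injection inside the feed-forward block leaves the attention layers untouched, matching the entry's claim that simply injecting knowledge into the FFN can help pre-trained language models use external knowledge.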