Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question
Answering
- URL: http://arxiv.org/abs/2210.02933v2
- Date: Mon, 10 Oct 2022 01:12:29 GMT
- Title: Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question
Answering
- Authors: Mingxuan Ju, Wenhao Yu, Tong Zhao, Chuxu Zhang, Yanfang Ye
- Abstract summary: A common thread of open-domain question answering (QA) models employs a retriever-reader pipeline that first retrieves a handful of relevant passages from Wikipedia.
We propose a novel knowledge Graph enhanced passage reader, namely Grape, to improve the reader performance for open-domain QA.
- Score: 36.85435188308579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common thread of open-domain question answering (QA) models employs a
retriever-reader pipeline that first retrieves a handful of relevant passages
from Wikipedia and then peruses the passages to produce an answer. However,
even state-of-the-art readers fail to capture the complex relationships between
entities appearing in questions and retrieved passages, leading to answers that
contradict the facts. In light of this, we propose a novel knowledge Graph
enhanced passage reader, namely Grape, to improve the reader performance for
open-domain QA. Specifically, for each pair of question and retrieved passage,
we first construct a localized bipartite graph, attributed to entity embeddings
extracted from the intermediate layer of the reader model. Then, a graph neural
network learns relational knowledge while fusing graph and contextual
representations into the hidden states of the reader model. Experiments on
three open-domain QA benchmarks show Grape can improve the state-of-the-art
performance by up to 2.2 exact match score with a negligible overhead increase,
with the same retriever and retrieved passages. Our code is publicly available
at https://github.com/jumxglhf/GRAPE.
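The mechanism the abstract describes — a localized bipartite graph over question and passage entities, with node features read from the reader's intermediate hidden states, and a graph step whose output is fused back into those hidden states — can be sketched minimally as follows. Everything here is an illustrative assumption, not Grape's actual architecture: a single mean-aggregation message-passing round stands in for the graph neural network, and the fusion is a plain residual update at entity token positions; the function name and signature are invented for this sketch.

```python
import numpy as np

def bipartite_gnn_fuse(hidden, q_ent_pos, p_ent_pos, edges):
    """One message-passing round over a question-passage bipartite entity
    graph, fused back into reader hidden states (illustrative sketch).

    hidden:    (T, d) token hidden states from an intermediate reader layer
    q_ent_pos: token indices of question-entity mentions
    p_ent_pos: token indices of passage-entity mentions
    edges:     bipartite edges as (question_entity_idx, passage_entity_idx)
    """
    # Entity node features are taken directly from the reader's hidden states.
    q_emb = hidden[q_ent_pos]  # (Q, d)
    p_emb = hidden[p_ent_pos]  # (P, d)

    # Mean-aggregate each node's neighbors across the bipartite edges.
    q_msg, p_msg = np.zeros_like(q_emb), np.zeros_like(p_emb)
    q_cnt, p_cnt = np.zeros(len(q_ent_pos)), np.zeros(len(p_ent_pos))
    for qi, pj in edges:
        q_msg[qi] += p_emb[pj]; q_cnt[qi] += 1
        p_msg[pj] += q_emb[qi]; p_cnt[pj] += 1
    q_new = q_emb + q_msg / np.maximum(q_cnt, 1)[:, None]
    p_new = p_emb + p_msg / np.maximum(p_cnt, 1)[:, None]

    # Fuse the graph representations back into the contextual hidden
    # states via a residual update at the entity token positions only;
    # all other token states pass through unchanged.
    fused = hidden.copy()
    fused[q_ent_pos] = q_new
    fused[p_ent_pos] = p_new
    return fused
```

In a real reader this step would sit between two transformer layers, so later layers attend over entity states already enriched with relational signal from the other side of the question-passage pair.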
Related papers
- QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs [5.09189220106765]
We propose a simple yet efficient method called question and passage augmentation (QPaug) via large language models (LLMs) for open-domain question-answering tasks.
Experimental results show that QPaug outperforms the previous state-of-the-art and achieves significant performance gain over existing RAG methods.
arXiv Detail & Related papers (2024-06-20T12:59:27Z) - Single Sequence Prediction over Reasoning Graphs for Multi-hop QA [8.442412179333205]
We propose a single-sequence prediction method over a local reasoning graph.
We use a graph neural network to encode this graph structure and fuse the resulting representations into the entity representations of the model.
Our experiments show significant improvements in answer exact-match/F1 scores and faithfulness of grounding in the reasoning path.
arXiv Detail & Related papers (2023-07-01T13:15:09Z) - Open-domain Question Answering via Chain of Reasoning over Heterogeneous
Knowledge [82.5582220249183]
We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.
Unlike previous methods that solely rely on the retriever for gathering all evidence in isolation, our intermediary performs a chain of reasoning over the retrieved set.
Our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, using tables and passages from Wikipedia as knowledge sources.
arXiv Detail & Related papers (2022-10-22T03:21:32Z) - Generate rather than Retrieve: Large Language Models are Strong Context
Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
arXiv Detail & Related papers (2022-09-21T01:30:59Z) - Multifaceted Improvements for Conversational Open-Domain Question
Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements, so they can be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z) - Open Domain Question Answering over Virtual Documents: A Unified
Approach for Data and Text [62.489652395307914]
We use the data-to-text method as a means for encoding structured knowledge for knowledge-intensive applications, i.e., open-domain question answering (QA).
Specifically, we propose a verbalizer-retriever-reader framework for open-domain QA over data and text where verbalized tables from Wikipedia and triples from Wikidata are used as augmented knowledge sources.
We show that our Unified Data and Text QA, UDT-QA, can effectively benefit from the expanded knowledge index, leading to large gains over text-only baselines.
arXiv Detail & Related papers (2021-10-16T00:11:21Z) - Open-Domain Question Answering with Pre-Constructed Question Spaces [70.13619499853756]
Open-domain question answering aims at solving the task of locating the answers to user-generated questions in massive collections of documents.
There are two families of solutions available: retriever-readers and knowledge-graph-based approaches.
We propose a novel algorithm with a reader-retriever structure that differs from both families.
arXiv Detail & Related papers (2020-06-02T04:31:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.