When Retriever-Reader Meets Scenario-Based Multiple-Choice Questions
- URL: http://arxiv.org/abs/2108.13875v1
- Date: Tue, 31 Aug 2021 14:32:04 GMT
- Title: When Retriever-Reader Meets Scenario-Based Multiple-Choice Questions
- Authors: Zixian Huang, Ao Wu, Yulin Shen, Gong Cheng, Yuzhong Qu
- Abstract summary: We propose a joint retriever-reader model called JEEVES where the retriever is implicitly supervised only using QA labels via a novel word weighting mechanism.
JEEVES significantly outperforms a variety of strong baselines on multiple-choice questions in three SQA datasets.
- Score: 15.528174963480614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scenario-based question answering (SQA) requires retrieving and reading
paragraphs from a large corpus to answer a question which is contextualized by
a long scenario description. Since a scenario contains both keyphrases for
retrieval and much noise, retrieval for SQA is extremely difficult. Moreover,
it can hardly be supervised due to the lack of relevance labels of paragraphs
for SQA. To meet the challenge, in this paper we propose a joint
retriever-reader model called JEEVES where the retriever is implicitly
supervised only using QA labels via a novel word weighting mechanism. JEEVES
significantly outperforms a variety of strong baselines on multiple-choice
questions in three SQA datasets.
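The abstract describes the retriever being trained only through the downstream QA loss, via learned weights on scenario words. Below is a minimal, hypothetical sketch of that idea in Python; the class and function names (WordWeighter, paragraph_score) are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of implicit retriever supervision via word weighting.
import torch
import torch.nn as nn

class WordWeighter(nn.Module):
    """Assigns a soft weight to each scenario word, learned end to end."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, scenario_ids):             # (num_words,)
        w = self.scorer(self.emb(scenario_ids))  # (num_words, 1)
        return torch.sigmoid(w).squeeze(-1)      # weights in (0, 1)

def paragraph_score(weights, scenario_ids, paragraph_ids):
    """Weighted lexical overlap: each scenario word contributes its
    learned weight if it occurs in the paragraph."""
    para = set(paragraph_ids.tolist())
    mask = torch.tensor([1.0 if t in para else 0.0
                         for t in scenario_ids.tolist()])
    return (weights * mask).sum()

# The paragraph scores act as soft attention over paragraphs inside the
# reader, and the only loss is the answer cross-entropy, so the retriever
# never sees relevance labels: it is supervised implicitly via QA labels.
```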
Related papers
- RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering [61.19126689470398]
Long-form RobustQA (LFRQA) is a new dataset covering 26K queries and large corpora across seven different domains.
We show via experiments that RAG-QA Arena's model-based evaluations and human judgments on answer quality are highly correlated.
Only 41.3% of the most competitive LLM's answers are preferred to LFRQA's answers, demonstrating RAG-QA Arena as a challenging evaluation platform for future research.
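As a rough illustration of the arena-style comparison (an assumed setup, not the released evaluation code), a win rate like the reported 41.3% can be tallied from pairwise preference judgments:

```python
# Hypothetical pairwise-preference tally; field names are illustrative.
judgments = [
    {"query": "q1", "preferred": "llm"},    # judge preferred the LLM answer
    {"query": "q2", "preferred": "lfrqa"},  # judge preferred the LFRQA answer
    {"query": "q3", "preferred": "lfrqa"},
]
llm_win_rate = sum(j["preferred"] == "llm" for j in judgments) / len(judgments)
print(f"LLM answers preferred in {llm_win_rate:.1%} of comparisons")
```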
arXiv Detail & Related papers (2024-07-19T03:02:51Z)
- MFORT-QA: Multi-hop Few-shot Open Rich Table Question Answering [3.1651118728570635]
In today's fast-paced industry, professionals face the challenge of summarizing a large number of documents and extracting vital information from them on a daily basis.
To address this challenge, the approach of Table Question Answering (QA) has been developed to extract the relevant information.
Recent advancements in Large Language Models (LLMs) have opened up new possibilities for extracting information from tabular data using prompts.
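A minimal few-shot table-QA prompt along the lines described might look like the following; this is a generic sketch, not the MFORT-QA prompt format.

```python
# Generic few-shot table-QA prompt construction; not the paper's exact format.
def linearize_table(header, rows):
    lines = [" | ".join(header)]
    lines += [" | ".join(str(c) for c in row) for row in rows]
    return "\n".join(lines)

few_shot_example = (
    "Table:\ncity | population\nParis | 2.1M\n"
    "Question: What is the population of Paris?\nAnswer: 2.1M\n"
)

def build_prompt(header, rows, question):
    table = linearize_table(header, rows)
    return f"{few_shot_example}\nTable:\n{table}\nQuestion: {question}\nAnswer:"

print(build_prompt(["country", "capital"], [["France", "Paris"]],
                   "What is the capital of France?"))
```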
arXiv Detail & Related papers (2024-03-28T03:14:18Z)
- SpeechDPR: End-to-End Spoken Passage Retrieval for Open-Domain Spoken Question Answering [76.4510005602893]
Spoken Question Answering (SQA) is essential for machines to reply to a user's question by finding the answer span within a given spoken passage.
This paper proposes the first known end-to-end framework, Speech Passage Retriever (SpeechDPR).
SpeechDPR learns a sentence-level semantic representation by distilling knowledge from the cascading model of an unsupervised ASR (UASR) and a text dense retriever (TDR).
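A minimal sketch of the distillation signal (an assumed form; the actual SpeechDPR objective may differ): the speech-side student is trained to match the sentence embedding produced by the cascaded UASR-plus-TDR teacher.

```python
# Hypothetical distillation loss for a speech passage retriever.
import torch
import torch.nn.functional as F

def distill_loss(student_speech_emb, teacher_text_emb):
    """Pull the student's speech embedding toward the teacher's text
    embedding (UASR transcript -> dense text retriever)."""
    s = F.normalize(student_speech_emb, dim=-1)
    t = F.normalize(teacher_text_emb.detach(), dim=-1)  # teacher is frozen
    return 1.0 - (s * t).sum(-1).mean()                 # cosine distance

student = torch.randn(8, 768, requires_grad=True)  # student outputs
teacher = torch.randn(8, 768)                      # teacher outputs
print(distill_loss(student, teacher))
```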
arXiv Detail & Related papers (2024-01-24T14:08:38Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: an FA-model, which simultaneously selects key phrases and generates full answers, and a Q-model, which takes the generated full answer as an additional input to generate questions.
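The two-stage design can be sketched as a simple pipeline; the functions below are toy stand-ins for the paper's trained neural generators.

```python
# Illustrative two-stage QG pipeline; fa_model and q_model are stand-ins.
def fa_model(context, answer):
    """Select key phrases and expand the short answer into a full answer."""
    phrases = [w for w in context.split() if w.istitle()]  # toy phrase selection
    return f"{answer} (supported by: {', '.join(phrases)})"

def q_model(context, answer, full_answer):
    """Generate a question conditioned on context, answer, and full answer."""
    return f"Which entity does '{full_answer}' refer to in the given context?"

context = "Marie Curie won the Nobel Prize in Physics in 1903."
answer = "Marie Curie"
full = fa_model(context, answer)   # stage 1: content planning
print(q_model(context, answer, full))  # stage 2: question generation
```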
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation)
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
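A hedged sketch of scoring against multiple positive and negative references; SQuArE's actual similarity is model-based, so the token overlap below is only a stand-in.

```python
# Toy reference-based answer scoring; SQuArE's real similarity is learned.
def overlap(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / max(len(a | b), 1)

def square_like_score(candidate, positives, negatives):
    pos = max(overlap(candidate, p) for p in positives)
    neg = max(overlap(candidate, n) for n in negatives)
    return pos - neg  # high when close to positives, far from negatives

print(square_like_score(
    "the capital of France is Paris",
    positives=["Paris is the capital of France"],
    negatives=["the capital of France is Lyon"],
))
```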
arXiv Detail & Related papers (2023-09-21T16:51:30Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements to be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
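A minimal sketch of a KL-divergence regularizer of the kind described (an assumed form, not the paper's exact loss): two views of the question are encouraged to induce similar retrieval distributions.

```python
# Hypothetical KL regularization between two retrieval distributions.
import torch
import torch.nn.functional as F

def kl_regularizer(scores_raw, scores_rewritten):
    """KL between passage distributions induced by the raw question
    and a conversationally rewritten question."""
    p = F.log_softmax(scores_raw, dim=-1)      # input: log-probabilities
    q = F.softmax(scores_rewritten, dim=-1)    # target: probabilities
    return F.kl_div(p, q, reduction="batchmean")

raw = torch.randn(4, 100)        # scores over 100 candidate passages
rewritten = torch.randn(4, 100)
print(kl_regularizer(raw, rewritten))
```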
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- TSQA: Tabular Scenario Based Question Answering [14.92495213480887]
Scenario-based question answering (SQA) has attracted increasing research interest.
To support the study of this task, we construct GeoTSQA.
We extend state-of-the-art MRC methods with TTGen, a novel table-to-text generator.
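The role of a table-to-text generator like TTGen can be sketched as follows; this naive template baseline stands in for the paper's trained generator, which also ranks candidate sentences.

```python
# Naive table-to-text baseline standing in for TTGen (illustrative only).
def table_to_text(header, rows):
    sentences = []
    for row in rows:
        pairs = [f"{h} is {v}" for h, v in zip(header, row)]
        sentences.append("; ".join(pairs) + ".")
    return " ".join(sentences)

header = ["city", "average temperature", "elevation"]
rows = [["Kunming", "15 C", "1892 m"]]
print(table_to_text(header, rows))
# The generated text can then be fed to a standard MRC reader.
```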
arXiv Detail & Related papers (2021-01-14T02:00:33Z)
- Effective FAQ Retrieval and Question Matching With Unsupervised Knowledge Injection [10.82418428209551]
We propose a contextual language model for retrieving appropriate answers to frequently asked questions.
We also explore capitalizing on domain-specific, topically relevant relations between words in an unsupervised manner.
We evaluate variants of our approach on a publicly-available Chinese FAQ dataset, and further apply and contextualize it to a large-scale question-matching task.
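One common unsupervised way to mine topically relevant word relations is pointwise mutual information (PMI) over a domain corpus; the sketch below is our assumed mechanism for illustration, not necessarily the paper's.

```python
# PMI-based word relation mining as one unsupervised option for
# "knowledge injection"; an assumed mechanism, for illustration only.
import math
from collections import Counter
from itertools import combinations

docs = [
    "reset your account password online",
    "account password recovery steps",
    "update billing address for account",
]
word_counts, pair_counts = Counter(), Counter()
for d in docs:
    words = set(d.split())
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

n = len(docs)
def pmi(w1, w2):
    joint = pair_counts[frozenset((w1, w2))] / n
    if joint == 0:
        return float("-inf")
    return math.log(joint / ((word_counts[w1] / n) * (word_counts[w2] / n)))

print(pmi("password", "recovery"))  # topically related words score high
```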
arXiv Detail & Related papers (2020-10-27T05:03:34Z)
- Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z)
- Relevance-guided Supervision for OpenQA with ColBERT [27.599190047511033]
ColBERT-QA adapts the scalable neural retrieval model ColBERT to OpenQA.
ColBERT creates fine-grained interactions between questions and passages.
This greatly improves OpenQA retrieval on Natural Questions, SQuAD, and TriviaQA.
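ColBERT's fine-grained interaction is its late-interaction (MaxSim) operator: every query token embedding is matched against every passage token embedding, and the per-query-token maxima are summed. A minimal sketch:

```python
# ColBERT-style late interaction (MaxSim) scoring.
import torch

def colbert_score(q_emb, d_emb):
    """q_emb: (num_q_tokens, dim), d_emb: (num_d_tokens, dim), both
    L2-normalized. Score = sum over query tokens of the max cosine
    similarity against any document token."""
    sim = q_emb @ d_emb.T               # (num_q_tokens, num_d_tokens)
    return sim.max(dim=1).values.sum()  # MaxSim, then sum

q = torch.nn.functional.normalize(torch.randn(6, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(40, 128), dim=-1)
print(colbert_score(q, d))
```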
arXiv Detail & Related papers (2020-07-01T23:50:58Z)