Can Question Rewriting Help Conversational Question Answering?
- URL: http://arxiv.org/abs/2204.06239v1
- Date: Wed, 13 Apr 2022 08:16:03 GMT
- Title: Can Question Rewriting Help Conversational Question Answering?
- Authors: Etsuko Ishii, Yan Xu, Samuel Cahyawijaya, Bryan Wilie
- Abstract summary: Question rewriting (QR) is a subtask of conversational question answering (CQA)
We investigate a reinforcement learning approach that integrates QR and CQA tasks and does not require corresponding QR datasets for targeted CQA.
We find, however, that the RL method is on par with the end-to-end baseline.
- Score: 13.484873786389471
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question rewriting (QR) is a subtask of conversational question answering
(CQA) aiming to ease the challenges of understanding dependencies among
dialogue history by reformulating questions in a self-contained form. Despite
seeming plausible, little evidence is available to justify QR as a mitigation
method for CQA. To verify the effectiveness of QR in CQA, we investigate a
reinforcement learning approach that integrates QR and CQA tasks and does not
require corresponding QR datasets for targeted CQA. We find, however, that the
RL method is on par with the end-to-end baseline. We provide an analysis of the
failure and describe the difficulty of exploiting QR for CQA.
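As a rough illustration of the kind of QR-CQA integration the abstract describes (not the authors' implementation; the rewriter and QA model interfaces below are placeholders), here is a minimal self-critical REINFORCE sketch that uses the downstream QA model's token-level F1 as the reward:

```python
# Minimal sketch of training a question rewriter with QA feedback via
# self-critical REINFORCE. Reward = token-level F1 of the answer the frozen
# CQA model produces for the rewritten question. qr_policy.sample(),
# qr_policy.greedy() and qa_model.answer() are hypothetical interfaces,
# not the paper's code; log_prob is assumed to be a differentiable tensor.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a gold answer span."""
    pred, gold = prediction.split(), reference.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def reinforce_step(qr_policy, qa_model, history, question, gold_answer, optimizer):
    """One policy-gradient update of the rewriter using QA F1 as the reward signal."""
    rewrite, log_prob = qr_policy.sample(history, question)  # stochastic rewrite + its log-probability
    baseline_rewrite = qr_policy.greedy(history, question)   # greedy rewrite as a baseline

    reward = token_f1(qa_model.answer(rewrite, history), gold_answer)
    baseline = token_f1(qa_model.answer(baseline_rewrite, history), gold_answer)

    loss = -(reward - baseline) * log_prob  # self-critical REINFORCE loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```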
Related papers
- Reverse Question Answering: Can an LLM Write a Question so Hard (or Bad) that it Can't Answer? [24.614521528699093]
Past work tests QA and RQA separately, but we test them jointly, comparing their difficulty, aiding benchmark design, and assessing reasoning consistency.
We run 16 LLMs on QA and RQA with trivia questions/answers, showing that, versus QA, LLMs are much less accurate in RQA for numerical answers but slightly more accurate in RQA for textual answers.
arXiv Detail & Related papers (2024-10-20T21:17:49Z)
- SpeechDPR: End-to-End Spoken Passage Retrieval for Open-Domain Spoken Question Answering [76.4510005602893]
Spoken Question Answering (SQA) is essential for machines to reply to a user's question by finding the answer span within a given spoken passage.
This paper proposes the first known end-to-end framework, Speech Dense Passage Retriever (SpeechDPR).
SpeechDPR learns a sentence-level semantic representation by distilling knowledge from the cascading model of unsupervised ASR (UASR) and dense text retriever (TDR)
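As a rough illustration of this kind of representation distillation (not the SpeechDPR architecture itself; the encoders, embedding sizes, and loss below are assumptions), a minimal sketch:

```python
# Minimal sketch of representation distillation in the spirit of SpeechDPR:
# a speech-based student retriever is trained to match the sentence embeddings
# produced by a cascaded teacher (unsupervised ASR followed by a dense text
# retriever). The embedding size and cosine loss are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    """Pull student speech-passage embeddings toward the frozen teacher's text embeddings."""
    teacher_emb = teacher_emb.detach()  # teacher provides targets only
    return 1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1).mean()

# Toy usage with random vectors standing in for encoder outputs.
student = torch.randn(8, 256, requires_grad=True)  # student(speech passage)
teacher = torch.randn(8, 256)                      # teacher(ASR transcript of the same passage)
loss = distillation_loss(student, teacher)
loss.backward()
```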
arXiv Detail & Related papers (2024-01-24T14:08:38Z)
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation)
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
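The exact SQuArE formulation is not reproduced here; the following is only a minimal sketch of the general idea of scoring a candidate answer against multiple correct and incorrect references, with a cheap lexical similarity standing in for a learned one:

```python
# Hedged sketch of reference-set answer scoring: reward a candidate that is
# closer to the correct (positive) references than to the incorrect (negative)
# ones. This is not the SQuArE metric itself; the similarity function and the
# max/difference aggregation are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity standing in for a learned sentence similarity."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def reference_set_score(candidate: str, positives: list[str], negatives: list[str]) -> float:
    best_pos = max(similarity(candidate, ref) for ref in positives)
    best_neg = max(similarity(candidate, ref) for ref in negatives)
    return best_pos - best_neg  # higher = closer to correct than to incorrect references

positives = ["Paris", "Paris is the capital of France."]
negatives = ["Lyon", "Lyon is the capital of France."]
good = reference_set_score("Paris is the capital of France.", positives, negatives)
bad = reference_set_score("Lyon is the capital of France.", positives, negatives)
print(good > bad)  # True: the correct answer scores higher than the incorrect one
```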
arXiv Detail & Related papers (2023-09-21T16:51:30Z)
- Reinforced Question Rewriting for Conversational Question Answering [25.555372505026526]
We develop a model to rewrite conversational questions into self-contained ones.
It allows using existing single-turn QA systems to avoid training a CQA model from scratch.
We propose using QA feedback to supervise the rewriting model with reinforcement learning.
arXiv Detail & Related papers (2022-10-27T21:23:36Z)
- CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning [16.470428531658232]
We develop a query rewriting model CONQRR that rewrites a conversational question in context into a standalone question.
We show that CONQRR achieves state-of-the-art results on a recent open-domain CQA dataset.
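As an illustration of the kind of retrieval feedback such a rewriter can be optimized against (the retriever interface below is a placeholder, not CONQRR's actual setup), a small sketch:

```python
# Small sketch of a retrieval-based reward for a conversational query rewriter:
# the reward is the reciprocal rank of the gold passage when the rewritten,
# standalone question is used as the retrieval query. The retriever's
# search() method is a hypothetical interface, not CONQRR's implementation.
def retrieval_reward(retriever, rewritten_question: str, gold_passage_id: str, k: int = 10) -> float:
    """Reciprocal rank of the gold passage among the top-k retrieved passages."""
    ranked_ids = retriever.search(rewritten_question, top_k=k)  # hypothetical API
    for rank, passage_id in enumerate(ranked_ids, start=1):
        if passage_id == gold_passage_id:
            return 1.0 / rank
    return 0.0  # gold passage not retrieved at all
```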
arXiv Detail & Related papers (2021-12-16T01:40:30Z)
- Relation-Guided Pre-Training for Open-Domain Question Answering [67.86958978322188]
We propose a Relation-Guided Pre-Training (RGPT-QA) framework to solve complex open-domain questions.
We show that RGPT-QA achieves 2.2%, 2.4%, and 6.3% absolute improvements in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions, respectively.
arXiv Detail & Related papers (2021-09-21T17:59:31Z)
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
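A toy version of this kind of rule-based conversion (NER only, without the dependency parsing and semantic role labeling the paper also uses; the wh-word mapping is an illustrative assumption) might look like:

```python
# Toy declarative-to-question conversion using NER only: swap a named entity
# for a wh-word and keep the entity text as the answer. Illustrative heuristic,
# not the paper's pipeline. Requires: python -m spacy download en_core_web_sm
import spacy

WH_BY_ENTITY = {"PERSON": "Who", "GPE": "Where", "LOC": "Where", "DATE": "When"}

def sentence_to_qa(sentence: str, nlp):
    """Return a (question, answer) pair built from the first named entity, or None."""
    doc = nlp(sentence)
    for ent in doc.ents:
        wh_word = WH_BY_ENTITY.get(ent.label_, "What")
        question = (sentence[: ent.start_char] + wh_word + sentence[ent.end_char:]).rstrip(". ") + "?"
        return question, ent.text
    return None

nlp = spacy.load("en_core_web_sm")
print(sentence_to_qa("Barack Obama was born in Hawaii in 1961.", nlp))
# e.g. ("Who was born in Hawaii in 1961?", "Barack Obama")
```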
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base [67.87878113432723]
We introduce KQA Pro, a dataset for Complex KBQA including 120K diverse natural language questions.
For each question, we provide the corresponding KoPL program and SPARQL query, so that KQA Pro serves for both KBQA and semantic parsing tasks.
arXiv Detail & Related papers (2020-07-08T03:28:04Z)
- Question Rewriting for Conversational Question Answering [15.355557454305776]
We introduce a conversational QA architecture that sets the new state of the art on the TREC CAsT 2019 passage retrieval dataset.
We show that the same QR model improves QA performance on the QuAC dataset with respect to answer span extraction.
Our evaluation results indicate that the QR model achieves near human-level performance on both datasets.
arXiv Detail & Related papers (2020-04-30T09:27:43Z)
- Unsupervised Question Decomposition for Question Answering [102.56966847404287]
We propose an algorithm for One-to-N Unsupervised Sequence transduction (ONUS) that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions.
We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets.
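The rough shape of decomposed answering (the decomposition and single-hop QA functions below are placeholder interfaces, not the paper's unsupervised models) is:

```python
# Rough sketch of decomposed question answering: break a multi-hop question
# into single-hop sub-questions, answer each, and hand the sub-answers to the
# final QA step as extra evidence. decompose() and single_hop_qa() are
# placeholder interfaces, not the paper's models.
def answer_multi_hop(question: str, context: str, decompose, single_hop_qa) -> str:
    evidence = context
    for sub_question in decompose(question):          # e.g. two simpler sub-questions
        sub_answer = single_hop_qa(sub_question, evidence)
        evidence += f"\n{sub_question} {sub_answer}"   # append sub-QA pairs as hints
    return single_hop_qa(question, evidence)           # answer the original question last
```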
arXiv Detail & Related papers (2020-02-22T19:40:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.