Reinforced Question Rewriting for Conversational Question Answering
- URL: http://arxiv.org/abs/2210.15777v1
- Date: Thu, 27 Oct 2022 21:23:36 GMT
- Title: Reinforced Question Rewriting for Conversational Question Answering
- Authors: Zhiyu Chen, Jie Zhao, Anjie Fang, Besnik Fetahu, Oleg Rokhlenko,
Shervin Malmasi
- Abstract summary: We develop a model to rewrite conversational questions into self-contained ones.
It allows using existing single-turn QA systems to avoid training a CQA model from scratch.
We propose using QA feedback to supervise the rewriting model with reinforcement learning.
- Score: 25.555372505026526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational Question Answering (CQA) aims to answer questions contained
within dialogues, which are not easily interpretable without context.
Developing a model to rewrite conversational questions into self-contained ones
is an emerging solution in industry settings as it allows using existing
single-turn QA systems to avoid training a CQA model from scratch. Previous
work trains rewriting models using human rewrites as supervision. However, such
objectives are disconnected from QA models, and therefore more human-like
rewrites do not guarantee better QA performance. In this paper we propose using
QA feedback to supervise the rewriting model with reinforcement learning.
Experiments show that our approach can effectively improve QA performance over
baselines for both extractive and retrieval QA. Furthermore, human evaluation
shows that our method can generate more accurate and detailed rewrites when
compared to human annotations.
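The abstract does not give implementation details; the sketch below only illustrates the general recipe it describes (policy-gradient fine-tuning of a seq2seq rewriter, using the downstream QA system's answer quality as the reward). The T5 backbone, the `qa_system` stub, the `|||` separator, and the token-F1 reward are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch: REINFORCE-style fine-tuning of a question rewriter,
# rewarded by how well a frozen single-turn QA system answers the rewrite.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")               # assumed rewriter backbone
rewriter = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(rewriter.parameters(), lr=1e-5)

def qa_system(question: str) -> str:
    """Placeholder for any frozen single-turn QA system (extractive or retrieval)."""
    return "a placeholder answer"

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between predicted and gold answers (a common QA metric)."""
    p, g = pred.lower().split(), gold.lower().split()
    if not p or not g:
        return 0.0
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def reinforce_step(history: list, question: str, gold_answer: str) -> float:
    # The rewriter conditions on the dialogue history plus the current question.
    src = tok(" ".join(history) + " ||| " + question, return_tensors="pt")
    # Sample a self-contained rewrite from the current policy.
    sampled = rewriter.generate(**src, do_sample=True, max_new_tokens=48)
    rewrite = tok.decode(sampled[0], skip_special_tokens=True)
    # Reward: how well the downstream QA system answers the rewritten question.
    reward = token_f1(qa_system(rewrite), gold_answer)
    # REINFORCE: minimizing reward * NLL equals maximizing reward * log p(rewrite).
    labels = tok(rewrite, return_tensors="pt").input_ids
    nll = rewriter(**src, labels=labels).loss   # mean per-token negative log-likelihood
    loss = reward * nll
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

In practice one would add a variance-reducing baseline (e.g., the reward of a greedily decoded rewrite, as in self-critical sequence training) and mix in supervised loss on human rewrites; the paper's actual reward and objective should be taken from the source.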
Related papers
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive
and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation).
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z) - Can Question Rewriting Help Conversational Question Answering? [13.484873786389471]
Question rewriting (QR) is a subtask of conversational question answering (CQA).
We investigate a reinforcement learning approach that integrates QR and CQA tasks and does not require corresponding QR datasets for targeted CQA.
We find, however, that the RL method is on par with the end-to-end baseline.
arXiv Detail & Related papers (2022-04-13T08:16:03Z) - Improving Unsupervised Question Answering via Summarization-Informed
Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a given ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
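As a toy stand-in for the pipeline described above (which combines dependency parsing, NER, and semantic role labeling), the fragment below shows only its simplest ingredient: turning a declarative sentence into a question by replacing a named entity with a wh-word. The spaCy model and the entity-to-wh mapping are assumptions for illustration, not the paper's rules.

```python
# Toy illustration: declarative sentence -> question via NER-driven wh-substitution.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

WH_BY_LABEL = {                      # assumed, coarse entity-to-wh mapping
    "PERSON": "Who",
    "ORG": "What organization",
    "GPE": "Where",
    "LOC": "Where",
    "DATE": "When",
    "TIME": "When",
}

def sentence_to_question(sentence: str):
    doc = nlp(sentence)
    for ent in doc.ents:
        wh = WH_BY_LABEL.get(ent.label_)
        if wh is None:
            continue
        # Replace the entity span with a wh-word; the entity itself becomes the answer.
        question = (sentence[:ent.start_char] + wh + sentence[ent.end_char:]).rstrip(". ") + "?"
        return question, ent.text
    return None

print(sentence_to_question("Marie Curie discovered radium in Paris."))
# -> ("Who discovered radium in Paris?", "Marie Curie")
```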
arXiv Detail & Related papers (2021-09-16T13:08:43Z) - Learn to Resolve Conversational Dependency: A Consistency Training
Framework for Conversational Question Answering [14.382513103948897]
We propose ExCorD (Explicit guidance on how to resolve Conversational Dependency) to enhance the abilities of QA models in comprehending conversational context.
In our experiments, we demonstrate that ExCorD significantly improves the QA models' performance, by up to 1.2 F1 on QuAC and 5.2 F1 on CANARD.
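The blurb does not state ExCorD's objective; the snippet below is a generic consistency-regularization sketch in that spirit (not necessarily the paper's exact loss): the QA model sees both the original question and its self-contained rewrite, and a KL term keeps the two span distributions close.

```python
# Generic consistency-regularization sketch for a span-extraction QA model:
# the supervised span loss is computed on the original question, and a KL term
# penalizes divergence between predictions on the original and the rewrite.
import torch
import torch.nn.functional as F

def consistency_loss(start_logits_orig: torch.Tensor,
                     end_logits_orig: torch.Tensor,
                     start_logits_rewrite: torch.Tensor,
                     end_logits_rewrite: torch.Tensor) -> torch.Tensor:
    """KL(p_rewrite || p_orig), averaged over the start and end distributions."""
    kl_start = F.kl_div(F.log_softmax(start_logits_orig, dim=-1),
                        F.softmax(start_logits_rewrite, dim=-1),
                        reduction="batchmean")
    kl_end = F.kl_div(F.log_softmax(end_logits_orig, dim=-1),
                      F.softmax(end_logits_rewrite, dim=-1),
                      reduction="batchmean")
    return 0.5 * (kl_start + kl_end)

# total_loss = span_loss_on_original + lambda_consistency * consistency_loss(...)
```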
arXiv Detail & Related papers (2021-06-22T07:16:45Z) - Counterfactual Variable Control for Robust and Interpretable Question
Answering [57.25261576239862]
Deep neural network based question answering (QA) models are neither robust nor explainable in many cases.
In this paper, we use causal inference to inspect the spurious "capabilities" that QA models acquire from shortcut correlations.
We propose a novel approach called Counterfactual Variable Control (CVC) that explicitly mitigates any shortcut correlation.
arXiv Detail & Related papers (2020-10-12T10:09:05Z) - Generating Diverse and Consistent QA pairs from Contexts with
Information-Maximizing Hierarchical Conditional VAEs [62.71505254770827]
We propose a hierarchical conditional variational autoencoder (HCVAE) for generating QA pairs given unstructured texts as contexts.
Our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training.
arXiv Detail & Related papers (2020-05-28T08:26:06Z) - Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we use the QA model to extract more appropriate answers, iteratively refining the data in RefQA.
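The refinement step is described only at a high level; below is a hedged sketch of one plausible loop, in which a trained QA model's confident, extractable predictions replace the harvested answers. The `qa_predict` interface and the confidence threshold are assumptions, not the paper's procedure.

```python
# Hypothetical refinement loop: use a trained QA model to replace weak answers
# in automatically constructed question-answer pairs, then retrain on the result.
from typing import Callable, Dict, List, Tuple

def refine_pairs(pairs: List[Dict],
                 qa_predict: Callable[[str, str], Tuple[str, float]],
                 min_confidence: float = 0.5) -> List[Dict]:
    """pairs: list of dicts with 'context', 'question', 'answer'.
    qa_predict(question, context) -> (predicted_answer, confidence)."""
    refined = []
    for pair in pairs:
        pred, conf = qa_predict(pair["question"], pair["context"])
        if conf >= min_confidence and pred and pred in pair["context"]:
            # Trust the QA model's confident, extractable prediction as the new answer.
            refined.append({**pair, "answer": pred})
        else:
            refined.append(pair)
    return refined

# Iterate: train QA model on pairs -> refine pairs with it -> retrain, for a few rounds.
```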
arXiv Detail & Related papers (2020-05-06T15:56:06Z) - Question Rewriting for Conversational Question Answering [15.355557454305776]
We introduce a conversational QA architecture that sets the new state of the art on the TREC CAsT 2019 passage retrieval dataset.
We show that the same QR model improves QA performance on the QuAC dataset with respect to answer span extraction.
Our evaluation results indicate that the QR model achieves near human-level performance on both datasets.
arXiv Detail & Related papers (2020-04-30T09:27:43Z) - Template-Based Question Generation from Retrieved Sentences for Improved
Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
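As a toy sketch of that idea, with an assumed TF-IDF retriever and a naive "replace the answer span with a wh-word" template (not the paper's actual templates):

```python
# Toy sketch: retrieve a sentence related to the one containing the answer,
# then apply a simple template to turn it into a question for pseudo-training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_related(query_sentence: str, corpus: list) -> str:
    vec = TfidfVectorizer().fit(corpus + [query_sentence])
    sims = cosine_similarity(vec.transform([query_sentence]), vec.transform(corpus))[0]
    return corpus[int(sims.argmax())]

def template_question(sentence: str, answer: str, wh: str = "what") -> str:
    # Naive template: replace the answer span with a wh-word.
    return sentence.replace(answer, wh, 1).rstrip(". ") + "?"

corpus = [
    "Radium was discovered by Marie Curie in 1898.",
    "The Eiffel Tower is located in Paris.",
]
related = retrieve_related("Marie Curie discovered radium.", corpus)
print(template_question(related, "Marie Curie", wh="whom"))
# -> "Radium was discovered by whom in 1898?"  (crude, but usable as pseudo-data)
```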
arXiv Detail & Related papers (2020-04-24T17:57:45Z)