Fluent Response Generation for Conversational Question Answering
- URL: http://arxiv.org/abs/2005.10464v2
- Date: Thu, 17 Dec 2020 03:56:09 GMT
- Title: Fluent Response Generation for Conversational Question Answering
- Authors: Ashutosh Baheti, Alan Ritter, Kevin Small
- Abstract summary: We propose a method for situating responses within a SEQ2SEQ NLG approach to generate fluent grammatical answer responses.
We use data augmentation to generate training data for an end-to-end system.
- Score: 15.826109118064716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question answering (QA) is an important aspect of open-domain conversational
agents, garnering specific research focus in the conversational QA (ConvQA)
subtask. One notable limitation of recent ConvQA efforts is the response being
answer span extraction from the target corpus, thus ignoring the natural
language generation (NLG) aspect of high-quality conversational agents. In this
work, we propose a method for situating QA responses within a SEQ2SEQ NLG
approach to generate fluent grammatical answer responses while maintaining
correctness. From a technical perspective, we use data augmentation to generate
training data for an end-to-end system. Specifically, we develop Syntactic
Transformations (STs) to produce question-specific candidate answer responses
and rank them using a BERT-based classifier (Devlin et al., 2019). Human
evaluation on SQuAD 2.0 data (Rajpurkar et al., 2018) demonstrates that the
proposed model outperforms baseline CoQA and QuAC models in generating
conversational responses. We further show our model's scalability by conducting
tests on the CoQA dataset. The code and data are available at
https://github.com/abaheti95/QADialogSystem.
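The generate-then-rank pipeline described in the abstract can be illustrated with a minimal sketch. The names and logic below are hypothetical stand-ins: the paper's Syntactic Transformations operate on parse trees, and the real ranker is a fine-tuned BERT classifier, whereas here candidate generation uses string templates and scoring uses a toy heuristic.

```python
# Hypothetical sketch of the generate-then-rank approach: produce candidate
# fluent responses for a (question, answer-span) pair, then pick the best one.
# All functions here are illustrative stand-ins, not the paper's implementation.

def generate_candidates(question, answer_span):
    """Produce candidate responses via simple templates
    (stand-in for the paper's parse-based Syntactic Transformations)."""
    q = question.rstrip("?")
    return [
        answer_span,                          # bare span (extractive baseline)
        f"{answer_span}.",                    # span as a minimal sentence
        f"The answer is {answer_span}.",      # generic full-sentence frame
        f"{q.capitalize()}: {answer_span}.",  # question-echoing frame
    ]

def score_response(question, response):
    """Toy fluency/relevance score (stand-in for the BERT-based ranker):
    prefer complete sentences that reuse content words from the question."""
    score = 1.0 if response.endswith(".") else 0.0
    content = set(question.lower().split()) - {"what", "who", "when", "is", "the", "a"}
    score += 0.1 * sum(1 for w in content if w in response.lower())
    score += 0.01 * len(response.split())  # mild preference for fuller responses
    return score

def best_response(question, answer_span):
    """Rank all candidates and return the highest-scoring response."""
    candidates = generate_candidates(question, answer_span)
    return max(candidates, key=lambda r: score_response(question, r))

print(best_response("Who wrote Hamlet?", "William Shakespeare"))
```

Under this toy scorer, full-sentence candidates always outrank the bare extractive span, which mirrors the paper's goal of moving beyond span extraction toward fluent conversational answers.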
Related papers
- GSQA: An End-to-End Model for Generative Spoken Question Answering [54.418723701886115]
We introduce the first end-to-end Generative Spoken Question Answering (GSQA) model that empowers the system to engage in abstractive reasoning.
Our model surpasses the previous extractive model by 3% on extractive QA datasets.
Our GSQA model shows the potential to generalize to a broad spectrum of questions, thus further expanding the spoken question answering capabilities of abstractive QA.
arXiv Detail & Related papers (2023-12-15T13:33:18Z) - PAXQA: Generating Cross-lingual Question Answering Examples at Training
Scale [53.92008514395125]
PAXQA (Projecting annotations for cross-lingual (x) QA) decomposes cross-lingual QA into two stages.
We propose a novel use of lexically-constrained machine translation, in which constrained entities are extracted from the parallel bitexts.
We show that models fine-tuned on these datasets outperform prior synthetic data generation models over several extractive QA datasets.
arXiv Detail & Related papers (2023-04-24T15:46:26Z) - Knowledge Transfer from Answer Ranking to Answer Generation [97.38378660163414]
We propose to train a GenQA model by transferring knowledge from a trained AS2 model.
We also propose to use the AS2 model prediction scores for loss weighting and score-conditioned input/output shaping.
arXiv Detail & Related papers (2022-10-23T21:51:27Z) - DUAL: Textless Spoken Question Answering with Speech Discrete Unit
Adaptive Learning [66.71308154398176]
Spoken Question Answering (SQA) has gained research attention and made remarkable progress in recent years.
Existing SQA methods rely on Automatic Speech Recognition (ASR) transcripts, which are time- and cost-prohibitive to collect.
This work proposes an ASR transcript-free SQA framework named Discrete Unit Adaptive Learning (DUAL), which leverages unlabeled data for pre-training and is fine-tuned by the SQA downstream task.
arXiv Detail & Related papers (2022-03-09T17:46:22Z) - Improving Unsupervised Question Answering via Summarization-Informed
Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
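The declarative-to-question transformation sketched in this abstract can be illustrated with a toy rule. This is a hypothetical stand-in: the paper uses dependency parsing, named entity recognition, and semantic role labeling, whereas the regex rule below only handles one simple sentence shape.

```python
# Toy illustration of turning a declarative summary sentence into a
# (question, answer) pair. A single regex rule stands in for the paper's
# dependency-parsing / NER / SRL pipeline.
import re

def declarative_to_question(sentence):
    """Match sentences of the shape "<Subject> <verb>ed <rest>." and ask
    who performed the action; the subject becomes the answer."""
    m = re.match(r"^(.+?) (\w+ed) (.+)\.$", sentence)
    if m:
        subject, verb, rest = m.groups()
        return (f"Who {verb} {rest}?", subject)
    return None  # sentence shape not covered by this toy rule

q, a = declarative_to_question("Marie Curie discovered radium in 1898.")
print(q, "->", a)
```

A real system would generate many such pairs from news summaries and use them as synthetic training data for an end-to-end QG model, as the abstract describes.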
arXiv Detail & Related papers (2021-09-16T13:08:43Z) - Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we take advantage of the QA model to extract more appropriate answers, iteratively refining the data over RefQA.
arXiv Detail & Related papers (2020-05-06T15:56:06Z) - Question Rewriting for Conversational Question Answering [15.355557454305776]
We introduce a conversational QA architecture that sets the new state of the art on the TREC CAsT 2019 passage retrieval dataset.
We show that the same question rewriting (QR) model improves QA performance on the QuAC dataset with respect to answer span extraction.
Our evaluation results indicate that the QR model achieves near human-level performance on both datasets.
arXiv Detail & Related papers (2020-04-30T09:27:43Z) - Template-Based Question Generation from Retrieved Sentences for Improved
Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
arXiv Detail & Related papers (2020-04-24T17:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.