Improving Unsupervised Question Answering via Summarization-Informed
Question Generation
- URL: http://arxiv.org/abs/2109.07954v1
- Date: Thu, 16 Sep 2021 13:08:43 GMT
- Title: Improving Unsupervised Question Answering via Summarization-Informed
Question Generation
- Authors: Chenyang Lyu, Lifeng Shang, Yvette Graham, Jennifer Foster, Xin Jiang,
Qun Liu
- Abstract summary: Question Generation (QG) is the task of generating a plausible question for a <passage, answer> pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
- Score: 47.96911338198302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question Generation (QG) is the task of generating a plausible question for a
given <passage, answer> pair. Template-based QG uses linguistically-informed
heuristics to transform declarative sentences into interrogatives, whereas
supervised QG uses existing Question Answering (QA) datasets to train a system
to generate a question given a passage and an answer. A disadvantage of the
heuristic approach is that the generated questions are heavily tied to their
declarative counterparts. A disadvantage of the supervised approach is that
the resulting models are heavily tied to the domain/language of the QA dataset used as training
data. In order to overcome these shortcomings, we propose an unsupervised QG
method which uses questions generated heuristically from summaries as a source
of training data for a QG system. We make use of freely available news summary
data, transforming declarative summary sentences into appropriate questions
using heuristics informed by dependency parsing, named entity recognition and
semantic role labeling. The resulting questions are then combined with the
original news articles to train an end-to-end neural QG model. We extrinsically
evaluate our approach using unsupervised QA: our QG model is used to generate
synthetic QA pairs for training a QA model. Experimental results show that,
trained with only 20k English Wikipedia-based synthetic QA pairs, the QA model
substantially outperforms previous unsupervised models on three in-domain
datasets (SQuAD1.1, Natural Questions, TriviaQA) and three out-of-domain
datasets (NewsQA, BioASQ, DuoRC), demonstrating the transferability of the
approach.
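To make the heuristic step concrete, below is a minimal sketch of this kind of declarative-to-question transformation, using spaCy for dependency parsing and named entity recognition. The wh-phrase mapping and the subject-only restriction are simplifying assumptions; the paper's actual rules are richer and also draw on semantic role labeling.
```python
# Minimal sketch of heuristic question generation from a declarative
# summary sentence, using spaCy for dependency parsing and NER.
# This is an illustrative simplification, not the paper's rule set,
# which also uses semantic role labeling.
import spacy

nlp = spacy.load("en_core_web_sm")

# Assumed mapping from entity types to wh-phrases (illustration only).
WH_FOR_ENT = {
    "PERSON": "Who",
    "ORG": "What organization",
    "GPE": "Where",
    "DATE": "When",
}

def sentence_to_qa(sentence: str):
    """Turn a declarative sentence into (question, answer) pairs by
    replacing a named entity in subject position with a wh-phrase."""
    doc = nlp(sentence)
    pairs = []
    for ent in doc.ents:
        wh = WH_FOR_ENT.get(ent.label_)
        # Only substitute entities in subject position; questions about
        # objects/obliques need wh-movement, which this sketch omits.
        if wh is None or ent.root.dep_ not in ("nsubj", "nsubjpass"):
            continue
        question = (sentence[: ent.start_char] + wh
                    + sentence[ent.end_char:]).rstrip(" .") + "?"
        pairs.append((question, ent.text))
    return pairs

print(sentence_to_qa("Apple acquired the startup in 2019."))
# [('What organization acquired the startup in 2019?', 'Apple')]
# (exact output depends on the parser and NER model)
```
Pairs produced this way from summary sentences, re-attached to the full articles, are the training data for the end-to-end neural QG model.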
Related papers
- Diversity Enhanced Narrative Question Generation for Storybooks [4.043005183192124]
We introduce a multi-question generation model (mQG) capable of generating multiple, diverse, and answerable questions.
To validate the answerability of the generated questions, we employ a SQuAD2.0 fine-tuned question answering model.
mQG shows promising results across various evaluation metrics when compared with strong baselines.
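A rough sketch of such an answerability check, assuming an off-the-shelf SQuAD2.0-fine-tuned extractive QA model from the Hugging Face hub (the checkpoint name and threshold are assumptions; mQG's actual validation may differ in its details):
```python
# Sketch of answerability filtering with a SQuAD2.0-style QA model.
# The checkpoint and the 0.5 threshold are assumptions, not mQG's
# exact configuration.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def is_answerable(question: str, context: str, threshold: float = 0.5) -> bool:
    # handle_impossible_answer lets the model return an empty span when
    # it judges the question unanswerable (SQuAD2.0-style behaviour).
    pred = qa(question=question, context=context,
              handle_impossible_answer=True)
    return bool(pred["answer"]) and pred["score"] >= threshold

story = "The fox hid its acorns under the old oak tree."
print(is_answerable("Where did the fox hide its acorns?", story))  # True
print(is_answerable("What colour is the fox's bicycle?", story))   # likely False
```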
arXiv Detail & Related papers (2023-10-25T08:10:04Z)
- QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation [67.27999343730224]
We introduce an iterative bootstrapping framework for QA data augmentation, named QASnowball.
QASnowball can iteratively generate large-scale, high-quality QA data based on a seed set of supervised examples.
We conduct experiments in a high-resource English scenario and a medium-resource Chinese scenario; the results show that the data generated by QASnowball can facilitate the training of QA models.
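The bootstrapping loop can be pictured with the skeleton below; every component is a hypothetical stand-in for QASnowball's actual generator, filter, and training procedure.
```python
# Skeleton of an iterative bootstrapping loop in the spirit of
# QASnowball. All components are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str
    confidence: float  # score assigned to the generated pair

def train_generator(pool):
    # Placeholder: in reality, fine-tune a QG model on the current pool.
    return lambda passage: [QAPair("Who ...?", "...", 0.95)]

def snowball(seed_pairs, corpus, rounds=3, min_conf=0.9):
    pool = list(seed_pairs)
    for _ in range(rounds):
        generate = train_generator(pool)
        candidates = [p for passage in corpus for p in generate(passage)]
        # Keep only high-confidence pairs so data quality does not
        # degrade as the pool snowballs across iterations.
        pool.extend(p for p in candidates if p.confidence >= min_conf)
    return pool
```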
arXiv Detail & Related papers (2023-09-19T05:20:36Z)
- Event Extraction as Question Generation and Answering [72.04433206754489]
Recent work on Event Extraction has reframed the task as Question Answering (QA).
We propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates.
Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
arXiv Detail & Related papers (2023-07-10T01:46:15Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
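Such an end-to-end QAG model is driven from a single seq2seq checkpoint that emits question-answer pairs directly from a passage. The sketch below assumes a checkpoint name and output format; treat both as assumptions and substitute any QAG-fine-tuned model.
```python
# Sketch of end-to-end QAG with a single seq2seq LM. The checkpoint
# name and the "question: ..., answer: ..." output format are assumed;
# substitute any QAG-fine-tuned model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "lmqg/t5-base-squad-qag"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

context = ("William Turner was an English painter who specialised "
           "in watercolour landscapes.")
ids = tok(context, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
# Expected shape (format depends on the fine-tuning):
# "question: What did William Turner specialise in?, answer: watercolour landscapes"
```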
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
- PIE-QG: Paraphrased Information Extraction for Unsupervised Question Generation from Small Corpora [4.721845865189576]
PIE-QG uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages.
Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates, while objects (or subjects) are considered as answers.
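A toy version of that triple-to-question step, assuming the <subject, predicate, object> triples have already been extracted with an OpenIE system (the wh-word choice is a simplification of PIE-QG's rules):
```python
# Toy version of PIE-QG's triple-to-question step, assuming triples
# were already extracted with OpenIE. The wh-word heuristic is a
# simplification; real rules would also handle verb inflection.
def triple_to_qa(subject: str, predicate: str, obj: str):
    # Ask about the object using subject + predicate ...
    q_about_obj = f"What did {subject} {predicate}?"
    # ... and about the subject using predicate + object.
    q_about_subj = f"Who {predicate} {obj}?"
    return [(q_about_obj, obj), (q_about_subj, subject)]

for q, a in triple_to_qa("Marie Curie", "discover", "polonium"):
    print(q, "->", a)
# What did Marie Curie discover? -> polonium
# Who discover polonium? -> Marie Curie  (verb left uninflected here)
```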
arXiv Detail & Related papers (2023-01-03T12:20:51Z)
- Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning [7.2745835227138045]
We propose a model for generating question-answer (QA) pairs with self-contained, summary-centric questions and length-constrained, article-summarizing answers.
A dataset of such QA pairs is then used to learn a QA pair generation model that produces summaries as answers, balancing brevity with sufficiency, jointly with their corresponding questions.
arXiv Detail & Related papers (2021-09-10T06:34:55Z)
- Summary-Oriented Question Generation for Informational Queries [23.72999724312676]
We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable-length passages as appropriate.
Our model shows SOTA performance on SQ generation on the NQ dataset (20.1 BLEU-4).
We further apply our model to out-of-domain news articles, evaluating with a QA system owing to the lack of gold questions, and demonstrate that it produces better SQs for news articles, with further confirmation via a human evaluation.
arXiv Detail & Related papers (2020-10-19T17:30:08Z)
- Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we leverage the QA model to extract more appropriate answers, iteratively refining the data in RefQA.
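The refinement idea can be sketched as follows: run the current QA model over each harvested pair and, when it makes a confident prediction, adopt its span as the new answer (the checkpoint and threshold are assumptions; the paper's procedure is more involved).
```python
# Sketch of iterative answer refinement: let a trained QA model relabel
# answers it is confident about. Checkpoint and threshold are assumed.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def refine(pairs, min_score=0.8):
    """pairs: iterable of dicts with 'question', 'context', 'answer'."""
    refined = []
    for p in pairs:
        pred = qa(question=p["question"], context=p["context"])
        # Adopt the model's span only when it is confident; otherwise
        # keep the originally harvested answer.
        answer = pred["answer"] if pred["score"] >= min_score else p["answer"]
        refined.append({**p, "answer": answer})
    return refined
```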
arXiv Detail & Related papers (2020-05-06T15:56:06Z)
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
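A minimal illustration of the template idea: mask a named entity in the retrieved sentence with a wh-word, cloze style. The single template and entity-type mapping below are simplified assumptions, and the retrieval step itself is omitted.
```python
# Minimal cloze-style template question generation over a retrieved
# sentence. The single template and wh-word mapping are simplified
# assumptions; retrieval of the related sentence is omitted here.
import spacy

nlp = spacy.load("en_core_web_sm")
WH = {"PERSON": "who", "GPE": "where", "DATE": "when", "ORG": "what"}

def template_question(retrieved_sentence: str):
    doc = nlp(retrieved_sentence)
    for ent in doc.ents:
        if ent.label_ in WH:
            # Replace the entity with a wh-word to form a cloze question;
            # the masked entity serves as the answer.
            cloze = (retrieved_sentence[: ent.start_char] + WH[ent.label_]
                     + retrieved_sentence[ent.end_char:])
            return cloze.rstrip(" .") + "?", ent.text
    return None

print(template_question("The treaty was signed in Paris."))
# ('The treaty was signed in where?', 'Paris')
```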
arXiv Detail & Related papers (2020-04-24T17:57:45Z)