Unified Question Generation with Continual Lifelong Learning
- URL: http://arxiv.org/abs/2201.09696v1
- Date: Mon, 24 Jan 2022 14:05:18 GMT
- Title: Unified Question Generation with Continual Lifelong Learning
- Authors: Wei Yuan, Hongzhi Yin, Tieke He, Tong Chen, Qiufeng Wang, Lizhen Cui
- Abstract summary: Existing QG methods mainly focus on building or training models for specific QG datasets.
We propose a model named Unified-QG based on lifelong learning techniques, which can continually learn QG tasks.
In addition, we verify the ability of a single trained Unified-QG model to improve 8 Question Answering (QA) systems' performance.
- Score: 41.81627903996791
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Question Generation (QG), as a challenging Natural Language Processing task,
aims at generating questions based on given answers and context. Existing QG
methods mainly focus on building or training models for specific QG datasets.
These works are subject to two major limitations: (1) They are dedicated to
specific QG formats (e.g., answer-extraction or multi-choice QG), therefore, if
we want to address a new format of QG, a re-design of the QG model is required.
(2) Optimal performance is only achieved on the dataset they were just trained
on. As a result, we have to train and keep various QG models for different QG
datasets, which is resource-intensive and ungeneralizable.
To solve the problems, we propose a model named Unified-QG based on lifelong
learning techniques, which can continually learn QG tasks across different
datasets and formats. Specifically, we first build a format-convert encoding to
transform different kinds of QG formats into a unified representation. Then, a
method named STRIDER (SimilariTy RegularIzed Difficult Example Replay) is built to alleviate
catastrophic forgetting in continual QG learning. Extensive experiments were
conducted on 8 QG datasets across 4 QG formats (answer-extraction,
answer-abstraction, multi-choice, and boolean QG) to demonstrate the
effectiveness of our approach. Experimental results demonstrate that our
Unified-QG can effectively and continually adapt to QG tasks when datasets and
formats vary. In addition, we verify the ability of a single trained Unified-QG
model to improve 8 Question Answering (QA) systems' performance by
generating synthetic QA data.
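The paper does not spell out STRIDER's implementation in this abstract, but the core idea of replaying difficult examples to alleviate catastrophic forgetting can be sketched generically. The class below is a minimal, illustrative replay buffer that keeps the highest-loss examples from each finished task and mixes them into later training; the class name, difficulty scoring, and omission of the similarity-regularization term are all assumptions for illustration, not the authors' method:

```python
import heapq

class DifficultExampleReplay:
    """Illustrative difficulty-based replay buffer for continual learning.

    Keeps the hardest (highest-loss) examples from each completed task and
    replays them alongside new-task data. This is a simplified sketch of
    replay-based continual learning, not the paper's STRIDER algorithm,
    which additionally applies a similarity regularization.
    """

    def __init__(self, per_task_capacity=2):
        self.per_task_capacity = per_task_capacity
        self.memory = []  # hard examples retained from all previous tasks

    def add_task(self, examples_with_loss):
        """Store the `per_task_capacity` examples with the highest loss.

        `examples_with_loss` is an iterable of (example, loss) pairs.
        """
        hardest = heapq.nlargest(
            self.per_task_capacity, examples_with_loss, key=lambda pair: pair[1]
        )
        self.memory.extend(example for example, _loss in hardest)

    def mix(self, new_task_examples):
        """Return new-task data augmented with replayed hard examples."""
        return list(new_task_examples) + list(self.memory)
```

In a continual QG setting, `add_task` would be called after training on each dataset, and `mix` would build the training set for the next one, so earlier formats are not entirely overwritten.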
Related papers
- Graph Guided Question Answer Generation for Procedural Question-Answering [29.169773816553153]
We introduce a method for generating exhaustive and high-quality training data for task-specific question answering (QA) models.
Key technological enabler is a novel mechanism for automatic question-answer generation from procedural text.
We show that small models trained with our data achieve excellent performance on the target QA task, even exceeding that of GPT-3 and ChatGPT.
arXiv Detail & Related papers (2024-01-24T17:01:42Z) - Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation [64.64849950642619]
We develop an evaluation framework inspired by formal semantics for evaluating text-to-image models.
We show that Davidsonian Scene Graph (DSG) produces atomic and unique questions organized in dependency graphs.
We also present DSG-1k, an open-sourced evaluation benchmark that includes 1,060 prompts.
arXiv Detail & Related papers (2023-10-27T16:20:10Z) - Generative Language Models for Paragraph-Level Question Generation [79.31199020420827]
Powerful generative models have led to recent progress in question generation (QG).
It is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches.
We introduce QG-Bench, a benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting.
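Converting QA datasets into a standard QG setting, as QG-Bench does, amounts to recasting each record as a text-to-text pair. The helper below is a hypothetical sketch of such a conversion; the field names and prompt template are illustrative assumptions, not QG-Bench's actual serialization:

```python
def qa_to_qg_example(qa):
    """Recast an extractive QA record as a QG input/target pair.

    `qa` is a dict with "context", "answer", and "question" keys. The prompt
    layout below is illustrative only; real benchmarks define their own
    serialization format.
    """
    source = f"generate question: answer: {qa['answer']} context: {qa['context']}"
    return {"input": source, "target": qa["question"]}
```

Under a scheme like this, heterogeneous QA datasets all reduce to the same sequence-to-sequence form, so one model can be trained or evaluated uniformly across them.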
arXiv Detail & Related papers (2022-10-08T10:24:39Z) - QA4QG: Using Question Answering to Constrain Multi-Hop Question Generation [54.136509061542775]
Multi-hop question generation (MQG) aims to generate complex questions which require reasoning over multiple pieces of information of the input passage.
We propose a novel framework, QA4QG, a QA-augmented BART-based framework for MQG.
Our results on the HotpotQA dataset show that QA4QG outperforms all state-of-the-art models.
arXiv Detail & Related papers (2022-02-14T08:16:47Z) - Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
arXiv Detail & Related papers (2021-09-16T13:08:43Z) - EQG-RACE: Examination-Type Question Generation [21.17100754955864]
We propose an innovative Examination-type Question Generation approach (EQG-RACE) to generate exam-like questions based on a dataset extracted from RACE.
Two main strategies are employed in EQG-RACE for dealing with discrete answer information and reasoning among long contexts.
Experimental results show state-of-the-art performance for EQG-RACE, which clearly outperforms the baselines.
arXiv Detail & Related papers (2020-12-11T03:52:17Z) - UnifiedQA: Crossing Format Boundaries With a Single QA System [84.63376743920003]
We argue that such boundaries are artificial and perhaps unnecessary, given the reasoning abilities we seek to teach are not governed by the format.
We build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats.
arXiv Detail & Related papers (2020-05-02T04:42:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.