Guiding the Growth: Difficulty-Controllable Question Generation through
Step-by-Step Rewriting
- URL: http://arxiv.org/abs/2105.11698v1
- Date: Tue, 25 May 2021 06:43:13 GMT
- Authors: Yi Cheng, Siyao Li, Bang Liu, Ruihui Zhao, Sujian Li, Chenghua Lin and
Yefeng Zheng
- Abstract summary: We argue that Question Generation (QG) systems should have stronger control over the logic of generated questions.
We propose a novel framework that progressively increases question difficulty through step-by-step rewriting.
- Score: 30.722526598633912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the task of Difficulty-Controllable Question Generation
(DCQG), which aims at generating questions with required difficulty levels.
Previous research on this task mainly defines the difficulty of a question as
whether it can be correctly answered by a Question Answering (QA) system,
lacking interpretability and controllability. In our work, we redefine question
difficulty as the number of inference steps required to answer it and argue
that Question Generation (QG) systems should have stronger control over the
logic of generated questions. To this end, we propose a novel framework that
progressively increases question difficulty through step-by-step rewriting
under the guidance of an extracted reasoning chain. A dataset is automatically
constructed to facilitate the research, on which extensive experiments are
conducted to test the performance of our method.
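The rewriting idea in the abstract can be illustrated with a toy sketch (all function names and the template-based rewriter below are illustrative assumptions, not the authors' actual model): difficulty is the number of inference steps, and each rewrite folds one more link of the extracted reasoning chain into the question.

```python
# Toy sketch of difficulty-controllable QG via step-by-step rewriting.
# A reasoning chain is a list of (head_entity, relation, tail_entity)
# triples; each rewrite replaces a bridge entity with a clause built
# from the previous link, adding one inference step (hop).

def initial_question(chain):
    """1-hop question built from the final link of the chain."""
    head, relation, _answer = chain[-1]
    return f"What is the {relation} of {head}?"

def rewrite_step(question, link):
    """Rewrite once: swap the bridge entity for a relative clause
    derived from an earlier link in the reasoning chain."""
    head, relation, tail = link
    return question.replace(tail, f"the {relation} of {head}")

def generate(chain, target_hops):
    """Progressively increase difficulty up to target_hops."""
    question = initial_question(chain)
    hops = 1
    for link in reversed(chain[:-1]):  # walk the chain backwards
        if hops >= target_hops:
            break
        question = rewrite_step(question, link)
        hops += 1
    return question

chain = [
    ("La La Land", "director", "Damien Chazelle"),
    ("Damien Chazelle", "birthplace", "Providence"),
]
# 1 hop: "What is the birthplace of Damien Chazelle?"
# 2 hops: "What is the birthplace of the director of La La Land?"
```

The real framework conditions each rewrite on a learned model guided by the extracted reasoning chain; the string template here only shows the control flow.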
Related papers
- Automatic question generation for propositional logical equivalences [6.221146613622175]
We develop and implement a method capable of generating tailored questions for each student.
Previous studies have investigated AQG frameworks in education, which include validity, user-defined difficulty, and personalized problem generation.
Our new AQG approach produces logical equivalence problems for Discrete Mathematics, which is a core course for year-one computer science students.
arXiv Detail & Related papers (2024-05-09T02:44:42Z)
- Explainable Multi-hop Question Generation: An End-to-End Approach without Intermediate Question Labeling [6.635572580071933]
Multi-hop question generation aims to generate complex questions that require multi-step reasoning over several documents.
Previous studies have predominantly utilized end-to-end models, wherein questions are decoded based on the representation of context documents.
This paper introduces an end-to-end question rewriting model that increases question complexity through sequential rewriting.
arXiv Detail & Related papers (2024-03-31T06:03:54Z)
- Qsnail: A Questionnaire Dataset for Sequential Question Generation [76.616068047362]
We present the first dataset specifically constructed for the questionnaire generation task, which comprises 13,168 human-written questionnaires.
We conduct experiments on Qsnail, and the results reveal that retrieval models and traditional generative models do not fully align with the given research topic and intents.
Despite enhancements through the chain-of-thought prompt and finetuning, questionnaires generated by language models still fall short of human-written questionnaires.
arXiv Detail & Related papers (2024-02-22T04:14:10Z)
- On the Robustness of Question Rewriting Systems to Questions of Varying Hardness [43.63930447922717]
We are interested in the robustness of a QR system to questions varying in rewriting hardness or difficulty.
We first propose a method to automatically classify questions into subsets of varying hardness, by measuring the discrepancy between a question and its rewrite.
To enhance the robustness of QR systems to questions of varying hardness, we propose a novel learning framework for QR that first trains a QR model independently on each subset of questions of a certain level of hardness, then combines these QR models as one joint model for inference.
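A minimal sketch of the hardness-splitting step, assuming a simple token-overlap discrepancy measure (the paper's actual metric and models are not reproduced here):

```python
# Illustrative sketch: bucket (question, rewrite) pairs by "rewriting
# hardness", measured as the token-level discrepancy between a
# question and its gold rewrite. The metric below is an assumption.

def hardness(question: str, rewrite: str) -> float:
    """Discrepancy score: 1 - Jaccard overlap of the token sets.
    A larger score means the rewrite changes more, i.e. is harder."""
    q, r = set(question.lower().split()), set(rewrite.lower().split())
    return 1.0 - len(q & r) / len(q | r)

def split_by_hardness(pairs, threshold=0.5):
    """Partition pairs into easy and hard subsets by threshold."""
    easy = [p for p in pairs if hardness(*p) < threshold]
    hard = [p for p in pairs if hardness(*p) >= threshold]
    return easy, hard

pairs = [
    ("who won", "who won"),                                   # easy
    ("what is its capital", "what is the capital of france"), # hard
]
```

Per the summary, a separate QR model would then be trained on each subset, and the subset models combined into one joint model for inference.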
arXiv Detail & Related papers (2023-11-12T11:09:30Z)
- In-Context Ability Transfer for Question Decomposition in Complex QA [6.745884231594893]
We propose ICAT (In-Context Ability Transfer) to solve complex question-answering tasks.
We transfer to LLMs the ability to decompose complex questions into simpler ones or to generate step-by-step rationales.
We conduct large-scale experiments on a variety of complex QA tasks involving numerical reasoning, compositional complex QA, and heterogeneous complex QA.
arXiv Detail & Related papers (2023-10-26T11:11:07Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model, which takes the generated full answer as an additional input to generate questions.
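The two-component data flow described above can be sketched as stub functions (the real FA-model and Q-model are trained networks; these placeholders only show how the full answer feeds the question generator):

```python
# Illustrative pipeline stubs for the MultiFactor-style data flow:
# FA-model produces a full answer, Q-model consumes it as extra input.

def fa_model(context: str, answer: str) -> str:
    """Stub: expand the short answer into a full-sentence answer
    using key phrases from the context (real model is learned)."""
    return f"{answer}, as stated in the context: {context}"

def q_model(context: str, answer: str, full_answer: str) -> str:
    """Stub: generate a question conditioned on context, answer,
    and the generated full answer."""
    return f"What does the context say about {answer}?"

def multifactor(context: str, answer: str) -> str:
    full_answer = fa_model(context, answer)   # stage 1: content plan
    return q_model(context, answer, full_answer)  # stage 2: question
```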
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- FOLLOWUPQG: Towards Information-Seeking Follow-up Question Generation [38.78216651059955]
We introduce the task of real-world information-seeking follow-up question generation (FQG).
We construct FOLLOWUPQG, a dataset of over 3K real-world (initial question, answer, follow-up question) tuples collected from a Reddit forum that provides layman-friendly explanations for open-ended questions.
In contrast to existing datasets, questions in FOLLOWUPQG use more diverse pragmatic strategies to seek information, and they also show higher-order cognitive skills.
arXiv Detail & Related papers (2023-09-10T11:58:29Z)
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break a complex task down into a simpler one, solve it, and then repeat the process until we reach the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
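The iterative decompose-solve loop can be sketched as follows (a toy, deterministic stand-in: `decompose` and `answer` here are hypothetical placeholders for the LM calls the paper describes):

```python
# Toy sketch of successive prompting: repeatedly ask for the next
# simple sub-question, answer it, and accumulate (question, answer)
# history until decomposition is finished, then answer the original.

def successive_prompting(question, decompose, answer):
    """decompose(question, history) -> next sub-question, or None
    when the history suffices; answer(q, history) -> an answer."""
    history = []
    while (sub_q := decompose(question, history)) is not None:
        history.append((sub_q, answer(sub_q, history)))
    return answer(question, history)  # final step uses solved parts

# Deterministic stubs standing in for LM calls:
data = [3, 9, 4]

def decompose(q, history):
    steps = ["What is the maximum?", "What is the minimum?"]
    return steps[len(history)] if len(history) < len(steps) else None

def answer(q, history):
    if "maximum" in q:
        return max(data)
    if "minimum" in q:
        return min(data)
    return history[0][1] - history[1][1]  # combine sub-answers

result = successive_prompting(
    "What is the difference between the largest and smallest numbers?",
    decompose, answer)
```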
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space [94.8320535537798]
We propose Controllable Rewriting based Question Data Augmentation (CRQDA) for machine reading comprehension (MRC), question generation, and question-answering natural language inference tasks.
We treat the question data augmentation task as a constrained question rewriting problem to generate context-relevant, high-quality, and diverse question data samples.
arXiv Detail & Related papers (2020-10-04T03:13:46Z)
- Reinforced Multi-task Approach for Multi-hop Question Generation [47.15108724294234]
We take up Multi-hop question generation, which aims at generating relevant questions based on supporting facts in the context.
We employ multitask learning with the auxiliary task of answer-aware supporting fact prediction to guide the question generator.
We demonstrate the effectiveness of our approach through experiments on the multi-hop question answering dataset, HotpotQA.
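The multi-task setup above amounts to combining the question-generation loss with an auxiliary supporting-fact prediction loss; a minimal sketch of such an objective (the weighting scheme and names are assumptions, not the paper's exact formulation):

```python
# Toy sketch of a multi-task objective: the question generator's loss
# is combined with an auxiliary answer-aware supporting-fact
# prediction loss, weighted by a hyperparameter.

def multitask_loss(qg_loss: float, support_fact_loss: float,
                   aux_weight: float = 0.5) -> float:
    """Joint training objective: L = L_qg + lambda * L_aux."""
    return qg_loss + aux_weight * support_fact_loss
```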
arXiv Detail & Related papers (2020-04-05T10:16:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.