Using Implicit Feedback to Improve Question Generation
- URL: http://arxiv.org/abs/2304.13664v1
- Date: Wed, 26 Apr 2023 16:37:47 GMT
- Title: Using Implicit Feedback to Improve Question Generation
- Authors: Hugo Rodrigues, Eric Nyberg, Luisa Coheur
- Abstract summary: Question Generation (QG) is a task of Natural Language Processing (NLP) that aims at automatically generating questions from text.
In this work, we present a system, GEN, that learns from such (implicit) feedback.
Results show that GEN is able to improve by learning from both levels of implicit feedback when compared to the version with no learning.
- Score: 4.4250613854221905
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Question Generation (QG) is a task of Natural Language Processing (NLP) that
aims at automatically generating questions from text. Many applications can
benefit from automatically generated questions, but often it is necessary to
curate those questions, either by selecting or editing them. This task is
informative on its own, but it is typically done post-generation, and, thus,
the effort is wasted. In addition, most existing systems cannot easily
incorporate this feedback. In this work, we present a system, GEN,
that learns from such (implicit) feedback. Following a pattern-based approach,
it takes as input a small set of sentence/question pairs and creates patterns
which are then applied to new unseen sentences. Each generated question, after
being corrected by the user, is used as a new seed in the next iteration, so
more patterns are created each time. We also take advantage of the corrections
made by the user to score the patterns and therefore rank the generated
questions. Results show that GEN is able to improve by learning from both
levels of implicit feedback when compared to the version with no learning,
considering the top 5, 10, and 20 questions. Improvements go up to 10%,
depending on the metric and strategy used.
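The loop described in the abstract — induce patterns from seed sentence/question pairs, apply them to unseen sentences, and use user corrections as implicit feedback to score patterns and rank questions — can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the toy "pattern" here is just the set of words a question copies from its seed sentence, whereas GEN's real patterns generalize over richer structure.

```python
# Hypothetical sketch of GEN's feedback loop. The pattern scheme and all
# function names are illustrative, not taken from the paper's code.

def induce_pattern(sentence, question):
    # Toy pattern: words the question copies verbatim from the seed sentence
    # act as anchors; the question itself is kept as the template.
    anchors = frozenset(sentence.lower().split()) & frozenset(question.lower().split())
    return (anchors, question)

def apply_pattern(pattern, sentence):
    # Fire only when the new sentence contains every anchor word.
    anchors, template = pattern
    return template if anchors <= set(sentence.lower().split()) else None

def record_feedback(scores, pattern, user_edited):
    # Implicit feedback: a question the user had to edit penalizes its
    # pattern; an accepted question rewards it.
    scores[pattern] = scores.get(pattern, 0.0) + (-1.0 if user_edited else 1.0)

def rank_candidates(candidates, scores):
    # candidates: list of (pattern, question); best-scored patterns first.
    return [q for p, q in sorted(candidates, key=lambda pq: -scores.get(pq[0], 0.0))]
```

In the paper's setup, each question the user corrects would then re-enter `induce_pattern` as a new seed, so the pattern set grows with every iteration.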
Related papers
- Diversity Enhanced Narrative Question Generation for Storybooks [4.043005183192124]
We introduce a multi-question generation model (mQG) capable of generating multiple, diverse, and answerable questions.
To validate the answerability of the generated questions, we employ a SQuAD2.0 fine-tuned question answering model.
mQG shows promising results across various evaluation metrics, among strong baselines.
arXiv Detail & Related papers (2023-10-25T08:10:04Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- SkillQG: Learning to Generate Question for Reading Comprehension Assessment [54.48031346496593]
We present a question generation framework with controllable comprehension types for assessing and improving machine reading comprehension models.
We first frame the comprehension type of questions based on a hierarchical skill-based schema, then formulate SkillQG as a skill-conditioned question generator.
Empirical results demonstrate that SkillQG outperforms baselines in terms of quality, relevance, and skill-controllability.
arXiv Detail & Related papers (2023-05-08T14:40:48Z)
- Answer ranking in Community Question Answering: a deep learning approach [0.0]
This work aims to advance the state of the art on answer ranking for community Question Answering with a deep learning approach.
We created a large data set of questions and answers posted to the Stack Overflow website.
We leveraged the natural language processing capabilities of dense embeddings and LSTM networks to produce a prediction for the accepted answer attribute.
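The task framed above — ranking candidate answers by a predicted "accepted answer" score — can be sketched with a toy scorer. The paper's system learns this score with dense embeddings and LSTMs; the bag-of-words cosine similarity below is only a hedged stand-in for that learned model.

```python
# Toy answer-ranking sketch: score each candidate answer against the question
# and sort. bow_cosine is a stand-in for the paper's learned LSTM scorer.

import math
from collections import Counter

def bow_cosine(a, b):
    # Cosine similarity between bag-of-words term-count vectors.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_answers(question, answers):
    # Higher score ~ more likely to be the accepted answer (toy proxy).
    return sorted(answers, key=lambda ans: bow_cosine(question, ans), reverse=True)
```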
arXiv Detail & Related papers (2022-10-16T18:47:41Z)
- Automatic question generation based on sentence structure analysis using machine learning approach [0.0]
This article introduces our framework for generating factual questions from unstructured text in the English language.
It uses a combination of traditional linguistic approaches based on sentence patterns with several machine learning methods.
The framework also includes a question evaluation module which estimates the quality of generated questions.
arXiv Detail & Related papers (2022-05-25T14:35:29Z)
- Quiz Design Task: Helping Teachers Create Quizzes with Automated Question Generation [87.34509878569916]
This paper focuses on the use case of helping teachers automate the generation of reading comprehension quizzes.
In our study, teachers building a quiz receive question suggestions, which they can either accept or refuse with a reason.
arXiv Detail & Related papers (2022-05-03T18:59:03Z)
- Exploring Question-Specific Rewards for Generating Deep Questions [42.243227323241584]
We design three different rewards that target to improve the fluency, relevance, and answerability of generated questions.
We find that optimizing question-specific rewards generally leads to better performance in automatic evaluation metrics.
arXiv Detail & Related papers (2020-11-02T16:37:30Z)
- Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning [55.08037694027792]
Complex question-answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Sequence-to-Sequence Learning for Indonesian Automatic Question Generator [0.0]
We construct an Indonesian automatic question generator, adapting the architecture from some previous works.
The system achieved BLEU1, BLEU2, BLEU3, BLEU4, and ROUGE-L scores of 38.35, 20.96, 10.68, 5.78, and 43.4 for SQuAD, and 39.9, 20.78, 10.26, 6.31, and 44.13 for TyDiQA.
arXiv Detail & Related papers (2020-09-29T09:25:54Z)
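Several entries above report BLEU scores. As a small worked example, the sketch below computes BLEU-1 (clipped unigram precision with a brevity penalty); full BLEU-n additionally combines modified precisions over higher-order n-grams, so this is a simplified illustration, not a replacement for a standard scorer.

```python
# BLEU-1 sketch: clipped unigram precision times a brevity penalty.

import math
from collections import Counter

def bleu1(candidate, reference):
    cand, ref = candidate.lower().split(), reference.lower().split()
    if not cand:
        return 0.0
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clipped matches: each candidate word credits at most its reference count.
    overlap = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = overlap / len(cand)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```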
This list is automatically generated from the titles and abstracts of the papers in this site.