Prompt-Engineering and Transformer-based Question Generation and
Evaluation
- URL: http://arxiv.org/abs/2310.18867v1
- Date: Sun, 29 Oct 2023 01:45:30 GMT
- Title: Prompt-Engineering and Transformer-based Question Generation and
Evaluation
- Authors: Rubaba Amyeen
- Abstract summary: This paper aims to find the best method to generate questions from textual data through a transformer model and prompt engineering.
The generated questions were compared against the baseline questions in the SQuAD dataset to evaluate the effectiveness of four different prompts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question generation has numerous applications in the educational context.
Question generation can prove helpful for students when reviewing content and
testing themselves. Furthermore, a question generation model can aid teachers
by lessening the burden of creating assessments and other practice material.
This paper aims to find the best method to generate questions from textual data
through a transformer model and prompt engineering. In this research, we
fine-tuned a pre-trained DistilBERT model on the SQuAD question-answering
dataset to generate questions. In addition to training a transformer model, we
applied prompt engineering to the LLaMA model to generate questions
effectively. The generated questions were compared against the baseline questions in
the SQuAD dataset to evaluate the effectiveness of four different prompts. All
four prompts demonstrated over 60% similarity on average. Of the
prompt-generated questions, 30% achieved a high similarity score greater than
70%.
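The abstract does not state which similarity metric was used to compare generated questions against the SQuAD baselines. As a minimal sketch, assuming a simple bag-of-words cosine similarity (a common lightweight choice; the paper's actual metric may differ), the evaluation step could look like:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two questions."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def evaluate(generated: list[str], baseline: list[str],
             high_threshold: float = 0.70) -> tuple[float, float]:
    """Return (average similarity, fraction of pairs above high_threshold),
    mirroring the paper's reported statistics (>60% average, 30% above 70%)."""
    scores = [cosine_similarity(g, b) for g, b in zip(generated, baseline)]
    avg = sum(scores) / len(scores)
    high_frac = sum(s > high_threshold for s in scores) / len(scores)
    return avg, high_frac
```

In practice an embedding-based similarity (e.g. sentence embeddings) would capture paraphrases that bag-of-words overlap misses; the function names here are illustrative, not from the paper.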
Related papers
- RAG-ConfusionQA: A Benchmark for Evaluating LLMs on Confusing Questions [52.33835101586687]
Conversational AI agents use Retrieval Augmented Generation (RAG) to provide verifiable document-grounded responses to user inquiries.
This paper presents a novel synthetic data generation method to efficiently create a diverse set of context-grounded confusing questions from a given document corpus.
arXiv Detail & Related papers (2024-10-18T16:11:29Z)
- Diversity Enhanced Narrative Question Generation for Storybooks [4.043005183192124]
We introduce a multi-question generation model (mQG) capable of generating multiple, diverse, and answerable questions.
To validate the answerability of the generated questions, we employ a SQuAD2.0 fine-tuned question answering model.
mQG shows promising results across various evaluation metrics, among strong baselines.
arXiv Detail & Related papers (2023-10-25T08:10:04Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
- Connecting Humanities and Social Sciences: Applying Language and Speech Technology to Online Panel Surveys [2.0646127669654835]
We explore the application of language and speech technology to open-ended questions in a Dutch panel survey.
In an experimental wave respondents could choose to answer open questions via speech or keyboard.
We report the errors the ASR system produces and investigate the impact of these errors on downstream analyses.
arXiv Detail & Related papers (2023-02-21T10:52:15Z)
- Learning to Diversify for Product Question Generation [68.69526529887607]
We show how the T5 pre-trained Transformer encoder-decoder model can be fine-tuned for the task.
We propose a novel learning-to-diversify (LTD) fine-tuning approach that enriches the language learned by the underlying Transformer model.
arXiv Detail & Related papers (2022-07-06T09:26:41Z)
- What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We constructed a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-staged knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z)
- Educational Question Generation of Children Storybooks via Question Type Distribution Learning and Event-Centric Summarization [67.1483219601714]
We propose a novel question generation method that first learns the question type distribution of an input story paragraph.
We fine-tune a pre-trained transformer-based sequence-to-sequence model using silver samples composed of educational question-answer pairs.
Our work indicates the necessity of decomposing question type distribution learning and event-centric summary generation for educational question generation.
arXiv Detail & Related papers (2022-03-27T02:21:19Z)
- Answer Generation for Questions With Multiple Information Sources in E-Commerce [0.0]
We propose a novel pipeline (MSQAP) that utilizes the rich information present in the aforementioned sources by separately performing relevancy and ambiguity prediction.
This is the first work in the e-commerce domain that automatically generates natural language answers combining the information present in diverse sources such as specifications, similar questions, and reviews data.
arXiv Detail & Related papers (2021-11-27T23:19:49Z)
- Asking Questions Like Educational Experts: Automatically Generating Question-Answer Pairs on Real-World Examination Data [10.353009081072992]
This paper addresses the question-answer pair generation task on the real-world examination data, and proposes a new unified framework on RACE.
We propose a multi-agent communication model to generate and optimize the question and keyphrases iteratively, and then apply the generated question and keyphrases to guide the generation of answers.
Experimental results show that our model achieves substantial improvements on the question-answer pair generation task.
arXiv Detail & Related papers (2021-09-11T04:10:57Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus [23.676748207014903]
We propose Answer-Clue-Style-aware Question Generation (ACS-QG)
It aims at automatically generating high-quality and diverse question-answer pairs from unlabeled text corpus at scale.
We can generate 2.8 million quality-assured question-answer pairs from a million sentences found in Wikipedia.
arXiv Detail & Related papers (2020-01-27T05:27:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.