An Automated Multiple-Choice Question Generation Using Natural Language
Processing Techniques
- URL: http://arxiv.org/abs/2103.14757v1
- Date: Fri, 26 Mar 2021 22:39:59 GMT
- Title: An Automated Multiple-Choice Question Generation Using Natural Language
Processing Techniques
- Authors: Chidinma A. Nwafor and Ikechukwu E. Onyenwe
- Abstract summary: We present an NLP-based system for automatic multiple-choice question generation (MCQG) for Computer-Based Testing Examination (CBTE).
We used an NLP technique to extract keywords, i.e., the important words in a given lesson material.
To validate the system, five lesson materials were used to check its effectiveness and efficiency.
- Score: 0.913755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic multiple-choice question generation (MCQG) is a useful yet
challenging task in Natural Language Processing (NLP). It is the task of
automatic generation of correct and relevant questions from textual data.
Despite its usefulness, manually creating sizeable, meaningful and relevant
questions is a time-consuming and challenging task for teachers. In this paper,
we present an NLP-based system for automatic MCQG for Computer-Based Testing
Examination (CBTE). We used an NLP technique to extract keywords, i.e., important
words in a given lesson material. To validate the system, five lesson materials
were used to check its effectiveness and efficiency. The keywords manually
extracted by the teacher were compared to the auto-generated keywords, and the
results show that the system was capable of extracting keywords from lesson
materials for setting examinable questions. This
outcome is presented in a user-friendly interface for easy accessibility.
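The keyword-extraction step the abstract describes can be approximated with a simple frequency-based filter. The sketch below is illustrative only, not the authors' actual pipeline; the function name, stopword list, and sample lesson text are all assumptions:

```python
import re
from collections import Counter

# A tiny stopword list for illustration; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "in", "to", "is", "and", "that", "for", "it", "on", "as", "are"}

def extract_keywords(lesson_text, top_n=5):
    """Return the top_n most frequent non-stopword terms in the lesson text."""
    words = re.findall(r"[a-z]+", lesson_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

lesson = ("An operating system manages hardware resources. "
          "The operating system schedules processes and allocates memory.")
print(extract_keywords(lesson, top_n=3))  # ['operating', 'system', 'manages']
```

The extracted keywords could then be blanked out of their sentences to form question stems, with distractors drawn from the remaining vocabulary.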
Related papers
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on math question knowledge tagging tasks.
By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the great potential of different-sized LLMs.
arXiv Detail & Related papers (2024-06-19T23:30:01Z)
- Automated Generation of Multiple-Choice Cloze Questions for Assessing English Vocabulary Using GPT-turbo 3.5 [5.525336037820985]
We evaluate a new method for automatically generating multiple-choice questions using large language models (LLMs).
The VocaTT engine is written in Python and comprises three basic steps: pre-processing target word lists, generating sentences and candidate word options, and finally selecting suitable word options.
Results showed a 75% rate of well-formedness for sentences and 66.85% rate for suitable word options.
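The three steps attributed to the VocaTT engine can be sketched as a skeleton pipeline. Everything below is an assumption for illustration (function names, placeholder sentence, selection heuristic); the real engine would call an LLM in step 2:

```python
def preprocess_targets(raw_words):
    # Step 1: normalize and deduplicate the target word list.
    return sorted({w.strip().lower() for w in raw_words if w.strip()})

def generate_items(word):
    # Step 2: produce a carrier sentence with a gap, plus candidate options.
    # A real system would prompt an LLM here; this stub returns placeholders.
    sentence = f"The student had to ____ the meaning of '{word}'."
    candidates = [word, word + "s", "guess", "ignore"]
    return sentence, candidates

def select_options(word, candidates, n_distractors=3):
    # Step 3: keep the answer plus distractors that are not trivial variants.
    distractors = [c for c in candidates if not c.startswith(word)]
    return [word] + distractors[:n_distractors]

targets = preprocess_targets([" Infer ", "infer", "Deduce"])
for w in targets:
    sentence, candidates = generate_items(w)
    print(sentence, select_options(w, candidates))
```

The reported 75% sentence well-formedness and 66.85% option-suitability rates suggest the selection step is where most manual review effort would remain.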
arXiv Detail & Related papers (2024-03-04T14:24:47Z)
- Learning to Filter Context for Retrieval-Augmented Generation [75.18946584853316]
Generation models are required to generate outputs given partially or entirely irrelevant passages.
FILCO identifies useful context based on lexical and information-theoretic approaches.
It trains context filtering models that can filter retrieved contexts at test time.
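A lexical filter of the kind this summary mentions can be sketched with simple word overlap. This is a rough stand-in for FILCO's trained filtering models, not the paper's method; the threshold and example passages are assumptions:

```python
def lexical_overlap(query, passage):
    """Fraction of query words that also appear in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def filter_contexts(query, passages, threshold=0.5):
    # Keep only retrieved passages with enough word overlap with the query.
    return [p for p in passages if lexical_overlap(query, p) >= threshold]

query = "who wrote hamlet"
passages = [
    "shakespeare wrote the play hamlet",  # relevant: shares 'wrote', 'hamlet'
    "the eiffel tower is in paris",       # irrelevant: no overlap
]
print(filter_contexts(query, passages))  # ['shakespeare wrote the play hamlet']
```

Passing only the surviving passages to the generator is what shields it from partially or entirely irrelevant retrievals.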
arXiv Detail & Related papers (2023-11-14T18:41:54Z)
- Answer Candidate Type Selection: Text-to-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs [62.20354845651949]
We present a novel approach that works on top of a pre-trained Text-to-Text QA system to address this issue.
Our simple yet effective method performs filtering and re-ranking of generated candidates based on their types derived from Wikidata "instance_of" property.
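Type-based re-ranking of generated answer candidates can be sketched as below. The toy lookup table stands in for real Wikidata "instance of" (P31) queries, and the function name and examples are assumptions, not the paper's code:

```python
# Toy lookup standing in for Wikidata "instance of" (P31) queries.
ENTITY_TYPES = {
    "Paris": "city",
    "France": "country",
    "Berlin": "city",
    "Angela Merkel": "human",
}

def rerank_by_type(candidates, expected_type):
    """Move candidates whose type matches to the front, preserving model order."""
    matching = [c for c in candidates if ENTITY_TYPES.get(c) == expected_type]
    others = [c for c in candidates if ENTITY_TYPES.get(c) != expected_type]
    return matching + others

# Question: "What is the capital of France?" -> expected answer type: city
print(rerank_by_type(["France", "Paris", "Angela Merkel"], "city"))
# ['Paris', 'France', 'Angela Merkel']
```

Re-ranking rather than hard filtering keeps a fallback when no candidate matches the expected type.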
arXiv Detail & Related papers (2023-10-10T20:49:43Z)
- Automatic Generation of Multiple-Choice Questions [7.310488568715925]
We present two methods to tackle the challenge of QAP generation:
a deep-learning-based end-to-end question generation system based on the T5 Transformer with preprocessing and postprocessing pipelines, and
a sequence-learning-based scheme to generate adequate QAPs via meta-sequence representations of sentences.
arXiv Detail & Related papers (2023-03-25T22:45:54Z)
- Rethinking Label Smoothing on Multi-hop Question Answering [87.68071401870283]
Multi-Hop Question Answering (MHQA) is a significant area in question answering.
In this work, we analyze the primary factors limiting the performance of multi-hop reasoning.
We propose a novel label smoothing technique, F1 Smoothing, which incorporates uncertainty into the learning process.
arXiv Detail & Related papers (2022-12-19T14:48:08Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- Tag-Set-Sequence Learning for Generating Question-Answer Pairs [10.48660454637293]
We present a new method called tag-set-sequence learning to address the problem of models generating silly questions for texts.
We construct a system called TSS-Learner to learn tag-set sequences from given declarative sentences and the corresponding interrogative sentences.
We show that TSS-Learner can indeed generate adequate QAPs for certain texts on which transformer-based models do poorly.
arXiv Detail & Related papers (2022-10-20T21:51:00Z)
- From Human Days to Machine Seconds: Automatically Answering and Generating Machine Learning Final Exams [10.25071232250652]
A final exam in machine learning at a top institution such as MIT, Harvard, or Cornell typically takes faculty days to write, and students hours to solve.
We demonstrate that large language models pass machine learning finals at a human level, on finals available online after the models were trained, and automatically generate new human-quality final exam questions in seconds.
arXiv Detail & Related papers (2022-06-11T06:38:06Z)
- Automatic question generation based on sentence structure analysis using machine learning approach [0.0]
This article introduces our framework for generating factual questions from unstructured text in the English language.
It uses a combination of traditional linguistic approaches based on sentence patterns with several machine learning methods.
The framework also includes a question evaluation module which estimates the quality of generated questions.
arXiv Detail & Related papers (2022-05-25T14:35:29Z)
- Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning.
arXiv Detail & Related papers (2021-10-15T14:36:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.