Few-shot Question Generation for Personalized Feedback in Intelligent
Tutoring Systems
- URL: http://arxiv.org/abs/2206.04187v1
- Date: Wed, 8 Jun 2022 22:59:23 GMT
- Title: Few-shot Question Generation for Personalized Feedback in Intelligent
Tutoring Systems
- Authors: Devang Kulshreshtha, Muhammad Shayan, Robert Belfer, Siva Reddy,
Iulian Vlad Serban, Ekaterina Kochmar
- Abstract summary: We show that our personalized corrective feedback system has the potential to improve Generative Question Answering systems.
Our model vastly outperforms both simple and strong baselines in terms of student learning gains, by 45% and 23% respectively, when tested in a real dialogue-based ITS.
- Score: 22.167776818471026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing work on generating hints in Intelligent Tutoring Systems (ITS)
focuses mostly on manual and non-personalized feedback. In this work, we
explore automatically generated questions as personalized feedback in an ITS.
Our personalized feedback can pinpoint correct and incorrect or missing phrases
in student answers, as well as guide students towards the correct answer by asking a
question in natural language. Our approach combines cause-effect analysis for
breaking down student answers with text-similarity-based NLP Transformer models
that identify the correct and the incorrect or missing parts. We train few-shot Neural
Question Generation and Question Re-ranking models to present questions addressing
components missing from the student answers, which steers students towards the
correct answer. Our model vastly outperforms both simple and strong baselines
in terms of student learning gains, by 45% and 23% respectively, when tested in a
real dialogue-based ITS. Finally, we show that our personalized corrective
feedback system has the potential to improve Generative Question Answering
systems.
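The matching step described in the abstract can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: it uses a bag-of-words cosine similarity as a stand-in for the Transformer-based text-similarity models, and the reference components, threshold, and function names are assumptions for the sake of the example.

```python
# Sketch of similarity-based matching: compare each reference-answer component
# against the sentences of a student answer and flag components whose best
# similarity falls below a threshold as "missing" (candidates for a follow-up
# question). Bag-of-words cosine stands in for a Transformer sentence encoder.
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two phrases."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_missing_components(reference_parts, student_answer, threshold=0.4):
    """Return reference components not covered by the student's answer."""
    sentences = [s.strip() for s in student_answer.split(".") if s.strip()]
    missing = []
    for part in reference_parts:
        best = max((cosine_sim(part, s) for s in sentences), default=0.0)
        if best < threshold:
            missing.append(part)
    return missing

reference = ["heat causes the air to expand", "the balloon rises"]
student = "The balloon goes up because the air inside expands."
print(find_missing_components(reference, student))
# → ['heat causes the air to expand']
```

In the paper's pipeline, the flagged components would then feed the few-shot question-generation and re-ranking models rather than being shown directly.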
Related papers
- Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors [78.53699244846285]
Large language models (LLMs) present an opportunity to scale high-quality personalized education to all.
However, LLMs struggle to precisely detect students' errors and to tailor their feedback to these errors.
Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions.
arXiv Detail & Related papers (2024-07-12T10:11:40Z)
- Student Answer Forecasting: Transformer-Driven Answer Choice Prediction for Language Learning [2.8887520199545187]
Recent research has primarily focused on the correctness of the answer rather than the student's performance on specific answer choices.
We present MCQStudentBert, an answer forecasting model that integrates contextual understanding of students' answering history along with the text of the questions and answers.
This work opens the door to more personalized content, modularization, and granular support.
arXiv Detail & Related papers (2024-05-30T14:09:43Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
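The preference-training step this entry refers to can be illustrated with the per-pair DPO objective. A minimal sketch, assuming the sequence log-probabilities of the preferred and rejected feedback are already computed; the function name and beta value are illustrative, not taken from the paper:

```python
# Direct Preference Optimization loss for a single preference pair:
# loss = -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))
# The policy is pushed to raise the probability of the preferred ("winning")
# feedback relative to the rejected one, anchored to a frozen reference model.
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (preferred, rejected) feedback pair.

    logp_*:     policy log-probabilities of the preferred (w) / rejected (l) feedback
    ref_logp_*: reference-model log-probabilities of the same sequences
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

When the policy and reference agree (margin 0) the loss is log 2; it shrinks as the policy favors the preferred feedback more strongly than the reference does.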
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Question Personalization in an Intelligent Tutoring System [5.644357169513361]
We show that generating versions of the questions suitable for students at different levels of subject proficiency improves student learning gains.
This insight demonstrates that the linguistic realization of questions in an ITS affects the learning outcomes for students.
arXiv Detail & Related papers (2022-05-25T15:23:51Z)
- Towards Teachable Reasoning Systems [29.59387051046722]
We develop a teachable reasoning system for question answering (QA).
Our approach is three-fold: First, generated chains of reasoning show how answers are implied by the system's own internal beliefs.
Second, users can interact with the explanations to identify erroneous model beliefs and provide corrections.
Third, we augment the model with a dynamic memory of such corrections.
arXiv Detail & Related papers (2022-04-27T17:15:07Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Deep Discourse Analysis for Generating Personalized Feedback in Intelligent Tutor Systems [4.716555240531893]
We explore creating automated, personalized feedback in an intelligent tutoring system (ITS).
Our goal is to pinpoint correct and incorrect concepts in student answers in order to achieve better student learning gains.
arXiv Detail & Related papers (2021-03-13T20:33:10Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Automated Personalized Feedback Improves Learning Gains in an Intelligent Tutoring System [34.19909376464836]
We investigate how automated, data-driven, personalized feedback in a large-scale intelligent tutoring system (ITS) improves student learning outcomes.
We propose a machine learning approach to generate personalized feedback, which takes individual needs of students into account.
We utilize state-of-the-art machine learning and natural language processing techniques to provide the students with personalized hints, Wikipedia-based explanations, and mathematical hints.
arXiv Detail & Related papers (2020-05-05T18:30:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.