Automated Personalized Feedback Improves Learning Gains in an
Intelligent Tutoring System
- URL: http://arxiv.org/abs/2005.02431v2
- Date: Thu, 7 May 2020 18:18:54 GMT
- Title: Automated Personalized Feedback Improves Learning Gains in an
Intelligent Tutoring System
- Authors: Ekaterina Kochmar, Dung Do Vu, Robert Belfer, Varun Gupta, Iulian Vlad
Serban, and Joelle Pineau
- Abstract summary: We investigate how automated, data-driven, personalized feedback in a large-scale intelligent tutoring system (ITS) improves student learning outcomes.
We propose a machine learning approach to generate personalized feedback, which takes individual needs of students into account.
We utilize state-of-the-art machine learning and natural language processing techniques to provide the students with personalized hints, Wikipedia-based explanations, and mathematical hints.
- Score: 34.19909376464836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate how automated, data-driven, personalized feedback in a
large-scale intelligent tutoring system (ITS) improves student learning
outcomes. We propose a machine learning approach to generate personalized
feedback, which takes individual needs of students into account. We utilize
state-of-the-art machine learning and natural language processing techniques to
provide the students with personalized hints, Wikipedia-based explanations, and
mathematical hints. Our model is used in Korbit, a large-scale dialogue-based
ITS launched in 2019 and used by thousands of students, and we demonstrate that the
personalized feedback leads to considerable improvement in student learning
outcomes and in the subjective evaluation of the feedback.
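To make the idea concrete, a minimal sketch of one plausible feedback-selection step is given below. It is not the Korbit implementation: the features, labels, and the use of a scikit-learn classifier are assumptions made purely for illustration of how a model could pick among the feedback types named in the abstract.

```python
# Hypothetical sketch: choosing a feedback type (personalized hint,
# Wikipedia-based explanation, or mathematical hint) from simple features.
# This is NOT the Korbit implementation; features and labels are invented.
from sklearn.linear_model import LogisticRegression

FEEDBACK_TYPES = ["text_hint", "wikipedia_explanation", "mathematical_hint"]

# Toy features: [answer_similarity_to_reference, num_prior_attempts, question_is_mathematical]
X_train = [
    [0.8, 1, 0],  # close answer, first retry        -> short text hint
    [0.2, 3, 0],  # far-off answer, several retries  -> fuller explanation
    [0.4, 2, 1],  # math question, partially correct -> mathematical hint
    [0.9, 1, 1],
    [0.1, 4, 0],
    [0.3, 1, 1],
]
y_train = [0, 1, 2, 2, 1, 2]  # indices into FEEDBACK_TYPES

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def select_feedback(similarity: float, attempts: int, is_math: bool) -> str:
    """Return the feedback type predicted for one student exchange."""
    label = model.predict([[similarity, attempts, int(is_math)]])[0]
    return FEEDBACK_TYPES[label]

print(select_feedback(similarity=0.25, attempts=3, is_math=False))
```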
Related papers
- Personalised Feedback Framework for Online Education Programmes Using Generative AI [0.0]
This paper presents an alternative feedback framework which extends the capabilities of ChatGPT by integrating embeddings.
As part of the study, we proposed and developed a proof-of-concept solution, achieving efficacy rates of 90% and 100% for open-ended and multiple-choice questions, respectively.
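One plausible reading of the embedding-integration step is sketched below: retrieve the most relevant course material for a student answer before prompting the LLM. TF-IDF vectors stand in for the neural embeddings presumably used, and the texts and prompt template are invented for illustration.

```python
# Hypothetical sketch of embedding-based retrieval before prompting an LLM.
# TF-IDF stands in for neural embeddings; texts and prompt are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_materials = [
    "Gradient descent updates parameters in the direction of the negative gradient.",
    "Overfitting occurs when a model memorizes training data and generalizes poorly.",
    "Cross-validation estimates generalization error by rotating held-out folds.",
]
student_answer = "The model just memorized the examples, so it does badly on new data."

vectorizer = TfidfVectorizer().fit(course_materials + [student_answer])
material_vecs = vectorizer.transform(course_materials)
answer_vec = vectorizer.transform([student_answer])

# Pick the material most similar to the student's answer as prompt context.
best = cosine_similarity(answer_vec, material_vecs).argmax()
context = course_materials[best]

prompt = (
    "You are a tutor. Using this course material:\n"
    f"{context}\n"
    f"Give brief, personalized feedback on the student's answer:\n{student_answer}"
)
print(prompt)
```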
arXiv Detail & Related papers (2024-10-14T22:35:40Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
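For reference, the core DPO objective over a single (preferred, dispreferred) feedback pair can be sketched as follows; the log-probabilities are placeholders for summed token log-probabilities under the trained policy and a frozen reference model, and the value of beta is illustrative rather than taken from the paper.

```python
# Sketch of the direct preference optimization (DPO) loss for one
# (preferred, dispreferred) feedback pair. The log-probabilities are
# placeholders; in practice they are summed token log-probs of each
# feedback text under the policy being trained and a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """-log sigmoid(beta * (chosen log-ratio minus rejected log-ratio))."""
    chosen_ratio = policy_logp_chosen - ref_logp_chosen
    rejected_ratio = policy_logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio))

# Illustrative numbers: the policy already slightly prefers the chosen feedback.
loss = dpo_loss(torch.tensor(-12.0), torch.tensor(-15.0),
                torch.tensor(-13.0), torch.tensor(-14.0))
print(float(loss))
```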
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Lessons Learned from Designing an Open-Source Automated Feedback System for STEM Education [5.326069675013602]
We present RATsApp, an open-source automated feedback system (AFS) that incorporates research-based features such as formative feedback.
The system focuses on core STEM competencies such as mathematical competence, representational competence, and data literacy.
As an open-source platform, RATsApp encourages public contributions to its ongoing development, fostering a collaborative approach to improve educational tools.
arXiv Detail & Related papers (2024-01-19T07:13:07Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is organized into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
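A rough sketch of what chaining with a shared memory module could look like is given below; `call_llm`, the prompts, and the memory layout are all invented placeholders, not the paper's implementation.

```python
# Hypothetical sketch of chaining LLM calls with a shared memory module.
# `call_llm` is a stub; prompts and the memory structure are invented.
def call_llm(prompt: str) -> str:
    return f"<LLM response to: {prompt[:40]}...>"  # placeholder backend

memory = {"student_profile": "struggles with recursion", "dialogue": []}

def tutoring_turn(student_message: str) -> str:
    # 1. Interaction: respond to the student using current memory.
    reply = call_llm(
        f"Profile: {memory['student_profile']}\n"
        f"History: {memory['dialogue'][-3:]}\n"
        f"Student: {student_message}\nRespond as a tutor."
    )
    # 2. Reflection: summarize what this exchange reveals about the student.
    reflection = call_llm(f"Summarize what this exchange shows: {student_message} / {reply}")
    # 3. Reaction: update memory so later turns are personalized.
    memory["student_profile"] += f"; {reflection}"
    memory["dialogue"].append((student_message, reply))
    return reply

print(tutoring_turn("I don't get why the base case matters."))
```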
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- PapagAI: Automated Feedback for Reflective Essays [48.4434976446053]
We present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system.
The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers.
arXiv Detail & Related papers (2023-07-10T11:05:51Z)
- Few-shot Question Generation for Personalized Feedback in Intelligent Tutoring Systems [22.167776818471026]
We show that our personalized corrective feedback system has the potential to improve Generative Question Answering systems.
Our model vastly outperforms both simple and strong baselines in terms of student learning gains, by 45% and 23% respectively, when tested in a real dialogue-based ITS.
arXiv Detail & Related papers (2022-06-08T22:59:23Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
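The simulation idea can be sketched as follows: a binary reward is derived by comparing the model-predicted span against the gold answer from supervised data, standing in for a real user's accept/reject signal. The examples and reward rule below are invented for illustration.

```python
# Hypothetical sketch of simulating bandit feedback for extractive QA:
# a binary reward is computed by checking the predicted span against the
# gold answer, standing in for a real user's accept/reject signal.
examples = [
    {"question": "Who wrote Hamlet?", "gold": "William Shakespeare"},
    {"question": "What is the capital of France?", "gold": "Paris"},
]

def predict_span(question: str) -> str:
    # Placeholder for the QA model's predicted answer span.
    return {"Who wrote Hamlet?": "Shakespeare",
            "What is the capital of France?": "Paris"}.get(question, "")

def simulated_reward(prediction: str, gold: str) -> int:
    # Simulated "user" accepts the answer if it overlaps with the gold span.
    p, g = prediction.strip().lower(), gold.strip().lower()
    return int(bool(p) and (p in g or g in p))

for ex in examples:
    pred = predict_span(ex["question"])
    r = simulated_reward(pred, ex["gold"])
    print(ex["question"], "->", pred, "| reward:", r)
    # In training, r would weight a bandit / policy-gradient update on the QA model.
```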
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
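The few-shot classification step can be sketched in the style of prototypical networks: average the embeddings of a few instructor-labeled examples per feedback class and assign a new student solution to the nearest prototype. TF-IDF character n-grams and the toy labels below stand in for the learned encoder and real feedback classes.

```python
# Hypothetical prototype-based few-shot classification sketch. TF-IDF
# character n-grams and the toy examples stand in for the learned encoder
# and the instructor-defined feedback classes.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

support = [
    ("while True: pass", "missing_termination"),
    ("while x: pass", "missing_termination"),
    ("while n > 0: n -= 1", "correct_loop"),
    ("for x in xs: process(x)", "correct_loop"),
]
query = "while True: x += 1"

texts = [code for code, _ in support] + [query]
vecs = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(texts).toarray()
support_vecs, query_vec = vecs[:-1], vecs[-1]

# One prototype per feedback class = mean of that class's support embeddings.
labels = sorted({label for _, label in support})
prototypes = {lab: support_vecs[[i for i, (_, l) in enumerate(support) if l == lab]].mean(axis=0)
              for lab in labels}

predicted = min(prototypes, key=lambda lab: np.linalg.norm(query_vec - prototypes[lab]))
print(predicted)  # the feedback class whose prototype is closest to the query
```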
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Deep Discourse Analysis for Generating Personalized Feedback in Intelligent Tutor Systems [4.716555240531893]
We explore creating automated, personalized feedback in an intelligent tutoring system (ITS).
Our goal is to pinpoint correct and incorrect concepts in student answers in order to achieve better student learning gains.
arXiv Detail & Related papers (2021-03-13T20:33:10Z)