A large language model-assisted education tool to provide feedback on
open-ended responses
- URL: http://arxiv.org/abs/2308.02439v1
- Date: Tue, 25 Jul 2023 19:49:55 GMT
- Title: A large language model-assisted education tool to provide feedback on
open-ended responses
- Authors: Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad
P. Kording
- Abstract summary: We present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions.
Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement.
- Score: 2.624902795082451
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies.
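As a rough sketch of the workflow the abstract describes (instructor-defined criteria steering an LLM's feedback), consider the following. This is not the authors' released implementation; the client library, model name, question, and grading criteria are all assumptions for illustration.

```python
# Hedged sketch of criteria-guided LLM feedback (not the authors' released
# tool). Assumes the OpenAI Python client; the model name, question, and
# grading criteria below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITERIA = """\
- States that gradient descent steps along the negative gradient.
- Mentions the role of the learning rate.
- Names one failure mode (e.g., divergence from too large a step size)."""

def feedback(question: str, response: str) -> str:
    """Return formative feedback on one open-ended student response."""
    prompt = (f"You are a teaching assistant.\n\nQuestion:\n{question}\n\n"
              f"Student response:\n{response}\n\n"
              f"Instructor criteria:\n{CRITERIA}\n\n"
              "Briefly say which criteria are met and which need work, "
              "without revealing a model answer.")
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content

print(feedback("Explain gradient descent.", "You step downhill on the loss."))
```

The same prompt assembly could back either of the reference front ends the abstract mentions (the web application or the Jupyter Notebook widget).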
Related papers
- Can Large Language Models Replicate ITS Feedback on Open-Ended Math Questions? [3.7399138244928145]
We study the capabilities of large language models to generate feedback for open-ended math questions.
We find that open-source and proprietary models both show promise in replicating the feedback they see during training, but do not generalize well to previously unseen student errors.
arXiv Detail & Related papers (2024-05-10T11:53:53Z)
- Mining patterns in syntax trees to automate code reviews of student solutions for programming exercises [0.0]
We introduce ECHO, a machine learning method to automate the reuse of feedback in educational code reviews.
Based on annotations from both automated linting tools and human reviewers, we show that ECHO can accurately and quickly predict appropriate feedback annotations.
arXiv Detail & Related papers (2024-04-26T14:03:19Z)
- KIWI: A Dataset of Knowledge-Intensive Writing Instructions for Answering Research Questions [63.307317584926146]
Large language models (LLMs) adapted to follow user instructions are now widely deployed as conversational agents.
In this work, we examine one increasingly common instruction-following task: providing writing assistance to compose a long-form answer.
We construct KIWI, a dataset of knowledge-intensive writing instructions in the scientific domain.
arXiv Detail & Related papers (2024-03-06T17:16:44Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
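For readers unfamiliar with DPO, here is a minimal sketch of the preference objective the summary refers to; it is not the paper's code, and the tensor names and beta value are illustrative assumptions.

```python
# Minimal DPO objective sketch (illustrative; not the paper's implementation).
# Each tensor holds summed token log-probabilities for a batch of feedback
# pairs: "w" marks the preferred feedback, "l" the dispreferred one.
import torch
import torch.nn.functional as F

def dpo_loss(logp_w_policy: torch.Tensor, logp_l_policy: torch.Tensor,
             logp_w_ref: torch.Tensor, logp_l_ref: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct preference optimization loss over feedback pairs."""
    # How much more the policy prefers the chosen feedback than the
    # frozen reference model does, scaled by beta.
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    return -F.logsigmoid(margin).mean()
```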
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Patterns of Student Help-Seeking When Using a Large Language Model-Powered Programming Assistant [2.5949084781328744]
This study examines students' use of an innovative tool that provides on-demand programming assistance without revealing solutions directly.
We collected more than 2,500 queries submitted by students throughout the term.
We found that most queries requested immediate help with programming assignments, whereas fewer asked about related concepts or sought to deepen conceptual understanding.
arXiv Detail & Related papers (2023-10-25T20:36:05Z)
- CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes [2.5949084781328744]
Large language models (LLMs) have emerged recently and show great promise for providing on-demand help at a large scale.
We introduce CodeHelp, a novel LLM-powered tool designed with guardrails to provide on-demand assistance to programming students without directly revealing solutions.
Our findings suggest that CodeHelp is well received by students, who especially value its availability and help with resolving errors. For instructors, it is easy to deploy and complements, rather than replaces, the support they provide to students.
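As a hedged illustration of what such guardrails might look like (not the actual CodeHelp source), the sketch below constrains an LLM with a tutoring-only system prompt; the client library, model name, and prompt wording are assumptions.

```python
# Illustrative guardrail prompt in the spirit of CodeHelp (not its source).
# Assumes the OpenAI Python client; model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL = ("You are a programming tutor. Explain concepts, point to relevant "
             "documentation, and suggest debugging steps, but never write or "
             "complete the student's assignment code.")

def assist(student_query: str) -> str:
    """Answer a student's question without revealing a full solution."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": GUARDRAIL},
                  {"role": "user", "content": student_query}],
    )
    return resp.choices[0].message.content
```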
arXiv Detail & Related papers (2023-08-14T03:52:24Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves an average 5.33x improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
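To make the few-shot-classification framing concrete, here is a toy prototype-based classifier sketch over assumed embeddings; the feedback labels, dimensions, and helper names are invented for illustration and are not the paper's method or code.

```python
# Toy few-shot feedback classification via class prototypes (illustrative
# only; not ProtoTransformer's code). Embeddings here are random stand-ins
# for learned representations of student submissions.
import numpy as np

def prototypes(support: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Average each feedback label's support embeddings into one prototype."""
    return {label: embs.mean(axis=0) for label, embs in support.items()}

def classify(query: np.ndarray, protos: dict[str, np.ndarray]) -> str:
    """Return the feedback label whose prototype is nearest to the query."""
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

# Toy usage: three 8-dimensional support embeddings per feedback label.
rng = np.random.default_rng(0)
support = {"off-by-one error": rng.normal(size=(3, 8)),
           "missing base case": rng.normal(size=(3, 8))}
print(classify(rng.normal(size=8), prototypes(support)))
```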
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Real-Time Cognitive Evaluation of Online Learners through Automatically Generated Questions [0.0]
The paper presents an approach to automatically generate questions from a given video lecture.
The generated questions aim to evaluate learners' lower-level cognitive abilities.
arXiv Detail & Related papers (2021-06-06T05:45:56Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning [56.771557756836906]
We present a novel method that alternately learns a retrieval model and a programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.