Can Automated Feedback Turn Students into Happy Prologians?
- URL: http://arxiv.org/abs/2504.16742v1
- Date: Wed, 23 Apr 2025 14:11:54 GMT
- Title: Can Automated Feedback Turn Students into Happy Prologians?
- Authors: Ricardo Brancas, Pedro Orvalho, Carolina Carreira, Vasco Manquinho, Ruben Martins
- Abstract summary: In ProHelp, we implemented several types of automated feedback adapted to different languages and programming paradigms. We surveyed students to find if the feedback was useful and which types of feedback they preferred. Results show that students found all types of feedback helpful, with automatic testing, in particular, being the most helpful type.
- Score: 0.9087641068861047
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Giving personalized feedback to students is very important to the learning process. However, doing so in a timely manner can be difficult to accomplish in very large courses. Recent work has explored different types of automated feedback adapted to different languages and programming paradigms, particularly logic programming. In ProHelp, we implemented several of these types of feedback so that they could be used by students enrolled in a logic programming class. Then, we surveyed those students to find if the feedback was useful and which types of feedback they preferred. Results show that students found all types of feedback helpful, with automatic testing, in particular, being the most helpful type. We also explore student preferences for which types of feedback they would most like to see implemented in the future.
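ProHelp's implementation is not reproduced here, but the automatic-testing feedback the abstract highlights can be sketched as follows: run the student's Prolog file against a set of instructor-written test queries and report which ones fail. The test queries, file name, and reliance on SWI-Prolog's `swipl` exit status are illustrative assumptions, not ProHelp's actual interface.

```python
import subprocess

# Hypothetical test suite: each entry is a ground Prolog query that
# should succeed against a correct solution (not ProHelp's real format).
TESTS = [
    "last([1,2,3], 3)",
    "last([a], a)",
]

def run_test(program: str, query: str, timeout: int = 5) -> bool:
    """Run one query against the student's program with SWI-Prolog.

    -q suppresses the banner, -g runs the goal after loading the file,
    and -t halt stops the interactive toplevel. In recent SWI-Prolog
    versions, a failing -g goal yields a non-zero exit status.
    """
    try:
        result = subprocess.run(
            ["swipl", "-q", "-g", query, "-t", "halt", program],
            capture_output=True, timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False  # treat non-termination as a failing test

def feedback(program: str) -> str:
    """Produce pass/fail feedback of the kind students could be shown."""
    failures = [q for q in TESTS if not run_test(program, q)]
    if not failures:
        return "All tests passed."
    return "Failing queries:\n" + "\n".join(f"  ?- {q}." for q in failures)

print(feedback("student_solution.pl"))  # illustrative file name
```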
Related papers
- You're (Not) My Type -- Can LLMs Generate Feedback of Specific Types for Introductory Programming Tasks? [0.4779196219827508]
This paper aims to generate specific types of feedback for programming tasks using Large Language Models (LLMs).
We revisit existing feedback to capture the specifics of the generated feedback, such as randomness, uncertainty, and degrees of variation.
Results have implications for future feedback research with regard to, for example, feedback effects and learners' informational needs.
arXiv Detail & Related papers (2024-12-04T17:57:39Z)
- Generating Feedback-Ladders for Logical Errors in Programming using Large Language Models [2.1485350418225244]
Large language model (LLM)-based methods have shown great promise in feedback generation for programming assignments.
This paper explores using LLMs to generate a "feedback-ladder", i.e., multiple levels of feedback for the same problem-submission pair.
We evaluate the quality of the generated feedback-ladder via a user study with students, educators, and researchers.
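As a rough illustration of the feedback-ladder idea (a sketch, not the paper's implementation), the helper below builds prompts that ask an LLM for progressively more revealing feedback on the same problem-submission pair; the rung descriptions are invented for illustration.

```python
# Hypothetical rungs, ordered from least to most revealing.
LADDER = [
    "State only whether the submission is correct.",
    "Name the concept the student misapplied, without pointing to code.",
    "Point to the line(s) where the logical error occurs.",
    "Explain the error and suggest a fix, without giving code.",
    "Provide a corrected version of the submission.",
]

def ladder_prompts(problem: str, submission: str) -> list[str]:
    """One prompt per rung; a grading tool would send these to an LLM."""
    return [
        f"Problem:\n{problem}\n\nStudent submission:\n{submission}\n\n"
        f"Give feedback at this level of detail: {level}"
        for level in LADDER
    ]
```

A teacher-facing tool could then reveal one rung at a time, matching the paper's goal of letting educators choose how much to disclose.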
arXiv Detail & Related papers (2024-05-01T03:52:39Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [46.667783153759636]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL). Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
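The DPO objective used in this line of work is standard; as a minimal sketch, the function below computes the loss for a batch of feedback preference pairs from per-sequence log-probabilities under the policy and a frozen reference model (the beta value and tensor names are illustrative).

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for a batch of feedback pairs.

    Each argument is a tensor of per-sequence log-probabilities:
    log pi(y | x) summed over the tokens of the feedback y.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Maximize the margin between preferred and dispreferred feedback.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```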
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Students' Perceptions and Preferences of Generative Artificial Intelligence Feedback for Programming [15.372316943507506]
We generated automated feedback using the ChatGPT API for four lab assignments in an introductory computer science class.
Students perceived the feedback as aligning well with formative feedback guidelines established by Shute.
Students generally expected specific and corrective feedback with sufficient code examples, but had diverged opinions on the tone of the feedback.
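A minimal sketch of generating such feedback through an LLM API, assuming the openai Python client (v1 interface); the model name, prompt wording, and guideline framing are illustrative, not the study's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_feedback(assignment: str, code: str) -> str:
    """Ask the model for formative feedback on one lab submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not the study's model
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant. Give specific, "
                        "corrective feedback with short code examples."},
            {"role": "user",
             "content": f"Assignment:\n{assignment}\n\nSubmission:\n{code}"},
        ],
    )
    return response.choices[0].message.content
```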
arXiv Detail & Related papers (2023-12-17T22:26:53Z)
- System-Level Natural Language Feedback [83.24259100437965]
We show how to use feedback to formalize system-level design decisions in a human-in-the-loop process.
We conduct two case studies of this approach for improving search query and dialog response generation.
We show the combination of system-level and instance-level feedback brings further gains.
arXiv Detail & Related papers (2023-06-23T16:21:40Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves an average 5.33x improvement in sample efficiency compared to traditional fine-tuning methods.
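TEMPERA learns its edit policy with RL; as a simplified stand-in (explicitly not the paper's method), the sketch below greedily applies discrete prompt edits and keeps whichever edit most improves a validation score. The `edits` and `score` callables are placeholders.

```python
def greedy_prompt_edit(prompt, edits, score, max_steps=10):
    """Greedy stand-in for an RL edit policy over prompts.

    edits: list of functions mapping a prompt string to an edited prompt.
    score: evaluates a prompt on a small labeled validation set.
    """
    best, best_score = prompt, score(prompt)
    for _ in range(max_steps):
        candidate = max((edit(best) for edit in edits), key=score)
        candidate_score = score(candidate)
        if candidate_score <= best_score:
            break  # no single edit improves the prompt; stop
        best, best_score = candidate, candidate_score
    return best
```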
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
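A minimal version of simulating bandit feedback from supervised QA data: sample an answer from the model, give reward 1 if it matches the gold answer and 0 otherwise, and update with a REINFORCE-style gradient. The model interface here is a hypothetical placeholder, not the paper's code.

```python
import torch

def bandit_step(model, optimizer, question, context, gold_answer):
    """One simulated-feedback update: the 'user' is the supervised label."""
    dist = model.answer_distribution(question, context)  # hypothetical API
    action = dist.sample()                     # sampled answer span
    answer = model.decode_span(context, action)  # hypothetical API
    reward = 1.0 if answer == gold_answer else 0.0
    loss = -reward * dist.log_prob(action)     # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```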
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- Feedback and Engagement on an Introductory Programming Module [0.0]
We ran a study of engagement and achievement in a first-year undergraduate programming module that used an online learning environment with tasks that generate automated feedback.
We gathered quantitative data on engagement and achievement which allowed us to split the cohort into 6 groups.
We then ran interviews with students after the end of the module to produce qualitative data on perceptions of what feedback is, how useful it is, the uses made of it, and how it bears on engagement.
arXiv Detail & Related papers (2022-01-04T16:53:09Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
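The few-shot framing can be made concrete with a prototypical-network classification step, a common meta-learning baseline (ProtoTransformer's actual architecture is more involved): average the embeddings of the instructor's few labeled examples per feedback class, then label new submissions by the nearest prototype.

```python
import torch

def classify_by_prototype(support_emb, support_labels, query_emb):
    """Nearest-prototype classification of student submissions.

    support_emb: [n, d] embeddings of instructor-labeled submissions.
    support_labels: [n] integer feedback-class ids.
    query_emb: [m, d] embeddings of new submissions to label.
    Returns predicted class ids for the queries.
    """
    classes = support_labels.unique()
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes]
    )                                           # [k, d] class centroids
    dists = torch.cdist(query_emb, prototypes)  # [m, k] Euclidean distances
    return classes[dists.argmin(dim=1)]
```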
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Effects of Human vs. Automatic Feedback on Students' Understanding of AI Concepts and Programming Style [0.0]
The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses.
There is a relative lack of data directly comparing student outcomes when receiving computer-generated versus human-written feedback.
This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance.
arXiv Detail & Related papers (2020-11-20T21:40:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.