How Helpful do Novice Programmers Find the Feedback of an Automated
Repair Tool?
- URL: http://arxiv.org/abs/2310.00954v2
- Date: Sat, 7 Oct 2023 13:49:07 GMT
- Title: How Helpful do Novice Programmers Find the Feedback of an Automated
Repair Tool?
- Authors: Oka Kurniawan, Christopher M. Poskitt, Ismam Al Hoque, Norman Tiong
Seng Lee, Cyrille Jégourel, Nachamma Sockalingam
- Abstract summary: We describe our experience of using CLARA, an automated repair tool, to provide feedback to novices.
First, we extended CLARA to support a larger subset of the Python language, before integrating it with the Jupyter Notebooks used for our programming exercises.
- We found that novices often struggled to understand the proposed repairs, echoing the well-known challenge of understanding compiler/interpreter messages.
- Score: 1.2990666399718034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Immediate feedback has been shown to improve student learning. In programming
courses, immediate, automated feedback is typically provided in the form of
pre-defined test cases run by a submission platform. While these are excellent
for highlighting the presence of logical errors, they do not provide novice
programmers enough scaffolding to help them identify where an error is or how
to fix it. To address this, several tools have been developed that provide
richer feedback in the form of program repairs. Studies of such tools, however,
tend to focus more on whether correct repairs can be generated, rather than how
novices are using them. In this paper, we describe our experience of using
CLARA, an automated repair tool, to provide feedback to novices. First, we
extended CLARA to support a larger subset of the Python language, before
integrating it with the Jupyter Notebooks used for our programming exercises.
Second, we devised a preliminary study in which students tackled programming
problems with and without the support of the tool, using the 'think aloud' protocol.
We found that novices often struggled to understand the proposed repairs,
echoing the well-known challenge of understanding compiler/interpreter messages.
Furthermore, we found that students valued being told where a fix was needed -
without necessarily being given the fix itself - suggesting that 'less may be
more' from a pedagogical perspective.
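The abstract mentions integrating the extended CLARA with Jupyter Notebooks but gives no implementation detail. Below is a minimal sketch, assuming a hypothetical %%check cell magic and a suggest_repairs hook (neither is CLARA's actual interface), of how location-only hints could be surfaced in a notebook cell, in line with the paper's 'less may be more' observation.

```python
# Minimal sketch (assumed interface, not the authors' implementation): an IPython
# cell magic that runs a student's cell and then reports only *where* a repair
# engine thinks a fix is needed, withholding the repair itself.
from IPython.core.magic import register_cell_magic

def suggest_repairs(source: str) -> list:
    """Hypothetical hook into a CLARA-like repair engine.
    Expected to return entries such as {'line': 3, 'hint': '...'}."""
    return []  # replace with a call to the actual repair tool

@register_cell_magic
def check(line, cell):
    """Usage: put %%check at the top of a cell containing the student's attempt."""
    try:
        exec(cell, {})  # run the submission as-is
    except Exception as exc:
        print(f"Your code raised: {exc!r}")
    for repair in suggest_repairs(cell):
        # Location-only feedback: tell the student where to look, not what to write.
        print(f"Line {repair['line']}: a fix may be needed here.")
```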
Related papers
- A Novel Approach for Automatic Program Repair using Round-Trip Translation with Large Language Models [50.86686630756207]
Research shows that grammatical mistakes in a sentence can be corrected by translating it to another language and back.
Current generative models for Automatic Program Repair (APR) are pre-trained on source code and fine-tuned for repair.
This paper proposes bypassing the fine-tuning step and instead using Round-Trip Translation (RTT): translating code from one programming language to another programming or natural language, and back.
arXiv Detail & Related papers (2024-01-15T22:36:31Z)
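A minimal sketch of the round-trip idea summarised in the entry above: the translate function is a hypothetical stand-in for an LLM call (no particular model or API is implied), and a candidate is kept only if it passes the exercise's tests.

```python
# Sketch of Round-Trip Translation (RTT) for program repair as summarised above.
# `translate` is a hypothetical placeholder for an LLM translation call.
from typing import Callable, Optional

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Placeholder for an LLM-backed translation between languages."""
    raise NotImplementedError

def rtt_repair(buggy_code: str,
               passes_tests: Callable[[str], bool],
               pivots: tuple = ("java", "english")) -> Optional[str]:
    for pivot in pivots:
        pivoted = translate(buggy_code, "python", pivot)   # forward translation
        candidate = translate(pivoted, pivot, "python")    # translate back
        if passes_tests(candidate):                        # keep only verified repairs
            return candidate
    return None  # no round trip produced a passing program
```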
- Flexible Control Flow Graph Alignment for Delivering Data-Driven Feedback to Novice Programming Learners [0.847136673632881]
We present several modifications to CLARA, a data-driven automated repair approach that is open source.
We extend CLARA's abstract syntax tree processor to handle common introductory programming constructs.
We modify an incorrect program's control flow graph to match that of a correct program, so that CLARA's original repair process can be applied.
arXiv Detail & Related papers (2024-01-02T19:56:50Z)
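The entry above describes aligning an incorrect program's control flow graph with a correct one before repair. The sketch below is a deliberately crude proxy for that idea, using Python's ast module to compare coarse control-structure "shapes"; the actual approach performs a much richer graph alignment.

```python
# Toy proxy for CFG alignment: flatten each program's control structure into a
# list of node kinds and only attempt a structure-matching repair when the
# incorrect and correct programs share the same shape.
import ast

def control_shape(source: str) -> list:
    """Flatten the control structure into a list of node kinds."""
    tree = ast.parse(source)
    return [type(node).__name__
            for node in ast.walk(tree)
            if isinstance(node, (ast.For, ast.While, ast.If, ast.FunctionDef))]

def alignable(incorrect: str, correct: str) -> bool:
    """A crude stand-in for CFG alignment: identical control-structure shapes."""
    return control_shape(incorrect) == control_shape(correct)

# Example: same loop skeleton, so a structure-matching repair could proceed.
print(alignable("for i in range(3):\n    print(i)",
                "for j in range(5):\n    print(j * 2)"))  # True
```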
- Dcc --help: Generating Context-Aware Compiler Error Explanations with Large Language Models [53.04357141450459]
dcc --help was deployed to our CS1 and CS2 courses, with 2,565 students using the tool over 64,000 times in ten weeks.
We found that the LLM-generated explanations were conceptually accurate in 90% of compile-time and 75% of run-time cases, but often disregarded the instruction not to provide solutions in code.
arXiv Detail & Related papers (2023-08-23T02:36:19Z)
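The dcc --help entry above describes generating context-aware explanations of compiler errors without handing out solution code. A minimal sketch of that workflow follows; ask_llm is a hypothetical hook, not part of dcc, and the prompt wording is illustrative only.

```python
# Sketch of context-aware compiler error explanation in the spirit of dcc --help.
# The real tool wraps the compiler itself; this sketch simply shells out to gcc.
import subprocess
from pathlib import Path
from typing import Optional

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model (no API implied)."""
    raise NotImplementedError

def explain_compile_error(c_file: str) -> Optional[str]:
    # Try to compile; if it succeeds there is nothing to explain.
    result = subprocess.run(["gcc", "-c", c_file, "-o", "/dev/null"],
                            capture_output=True, text=True)
    if result.returncode == 0:
        return None
    prompt = (
        "Explain this C compiler error to a first-year student in plain words. "
        "Do not provide corrected code.\n\n"
        f"Source:\n{Path(c_file).read_text()}\n\n"
        f"Compiler output:\n{result.stderr}"
    )
    return ask_llm(prompt)
```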
- A large language model-assisted education tool to provide feedback on open-ended responses [2.624902795082451]
We present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions.
Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement.
arXiv Detail & Related papers (2023-07-25T19:49:55Z)
- Generating High-Precision Feedback for Programming Syntax Errors using Large Language Models [23.25258654890813]
Large language models (LLMs) hold great promise in enhancing programming education by automatically generating feedback for students.
We introduce PyFiXV, our technique to generate high-precision feedback powered by Codex.
arXiv Detail & Related papers (2023-01-24T13:00:25Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Repairing Bugs in Python Assignments Using Large Language Models [9.973714032271708]
We propose to use a large language model trained on code to build an APR system for programming assignments.
Our system can fix both syntactic and semantic mistakes by combining multi-modal prompts, iterative querying, test-case-based selection of few-shots, and program chunking.
We evaluate MMAPR on 286 real student programs and compare it to a baseline built by combining a state-of-the-art Python syntax repair engine, BIFI, with a state-of-the-art Python semantic repair engine for student assignments, Refactory.
arXiv Detail & Related papers (2022-09-29T15:41:17Z)
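One ingredient named in the MMAPR entry above is test-case-based selection of few-shot examples. The sketch below illustrates that single step under assumed data structures; the names and the example-bank format are not taken from the MMAPR implementation.

```python
# Sketch of test-case-based few-shot selection only: pick past (buggy, fixed)
# pairs whose failing tests overlap most with the current submission's failures.
def select_few_shots(failing_tests: set, bank: list, k: int = 3) -> list:
    """bank holds dicts like {'buggy': ..., 'fixed': ..., 'failing': set(...)}."""
    scored = sorted(bank,
                    key=lambda ex: len(ex["failing"] & failing_tests),
                    reverse=True)
    return scored[:k]

# The selected (buggy, fixed) pairs would then be placed in the prompt ahead of
# the student's program before querying the language model.
```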
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
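The ProtoTransformer entry above frames feedback as few-shot classification. As a rough illustration only, the sketch below assigns a new submission the label of its nearest class prototype over some embedding; the real system's transformer encoder and meta-training are not shown.

```python
# Rough illustration of few-shot feedback classification with class prototypes;
# the embedding function producing the vectors is left abstract.
import numpy as np

def prototype_classify(support: dict, query: np.ndarray) -> str:
    """support maps each feedback label to an (n_examples, dim) embedding matrix."""
    prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}
    # Assign the label whose prototype is closest to the query embedding.
    return min(prototypes,
               key=lambda label: np.linalg.norm(query - prototypes[label]))
```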
- Break-It-Fix-It: Unsupervised Learning for Program Repair [90.55497679266442]
We propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas.
We use the critic to check a fixer's output on real bad inputs and add good (fixed) outputs to the training data.
Based on these ideas, we iteratively update the breaker and the fixer while using them in conjunction to generate more paired data.
BIFI outperforms existing methods, obtaining 90.5% repair accuracy on GitHub-Python and 71.7% on DeepFix.
arXiv Detail & Related papers (2021-06-11T20:31:04Z)
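The BIFI entry above describes an iterative loop between a breaker, a fixer, and a critic. The sketch below spells out one such round with placeholder components (critic, fixer, breaker, and train are all stand-ins, not the released implementation).

```python
# Sketch of one BIFI round as summarised above. `critic` could be a parser or
# compiler check; `train` stands in for a fine-tuning step on paired data.
def bifi_round(bad_examples, fixer, breaker, critic, train):
    paired = []
    # 1. Run the fixer on real bad inputs; keep outputs the critic accepts.
    for bad in bad_examples:
        fixed = fixer(bad)
        if critic(fixed):
            paired.append((bad, fixed))
    # 2. Retrain the breaker on (good -> bad) pairs, then use it to synthesise
    #    more realistic bad code from good code.
    breaker = train(breaker, [(good, bad) for bad, good in paired])
    synthetic = [(breaker(good), good) for _, good in paired]
    # 3. Retrain the fixer on both verified and synthetic pairs.
    fixer = train(fixer, paired + synthetic)
    return fixer, breaker
```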
- SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics [0.0]
Students could be helped, and instructors' time saved, by automated repair suggestions when dealing with syntax errors.
We introduce SYNFIX, a machine-learning based tool that substantially improves on the state-of-the-art.
We have built SYNFIX into a free, open-source version of Visual Studio Code; we make all our source code and models freely available.
arXiv Detail & Related papers (2021-04-29T21:57:44Z)
- Graph-based, Self-Supervised Program Repair from Diagnostic Feedback [108.48853808418725]
We introduce a program-feedback graph, which connects symbols relevant to program repair in source code and diagnostic feedback.
We then apply a graph neural network on top to model the reasoning process.
We present a self-supervised learning paradigm for program repair that leverages unlabeled programs available online.
arXiv Detail & Related papers (2020-05-20T07:24:28Z)
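The final entry above connects symbols in source code and diagnostic feedback into a program-feedback graph before applying a graph neural network. The sketch below builds a toy version of such a graph (edges between identifiers shared by the code and the error message); the paper's actual graph definition and GNN are not reproduced here.

```python
# Toy program-feedback graph: one edge per identifier that appears both in the
# source code and in the diagnostic message.
import ast
import re

def program_feedback_graph(source: str, diagnostic: str) -> list:
    code_symbols = {node.id for node in ast.walk(ast.parse(source))
                    if isinstance(node, ast.Name)}
    feedback_tokens = set(re.findall(r"[A-Za-z_]\w*", diagnostic))
    return [("code:" + s, "feedback:" + s)
            for s in sorted(code_symbols & feedback_tokens)]

print(program_feedback_graph("total = cnt + 1",
                             "NameError: name 'cnt' is not defined"))
# [('code:cnt', 'feedback:cnt')]
```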
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.