An Investigation Into Secondary School Students' Debugging Behaviour in Python
- URL: http://arxiv.org/abs/2508.14833v1
- Date: Wed, 20 Aug 2025 16:34:23 GMT
- Title: An Investigation Into Secondary School Students' Debugging Behaviour in Python
- Authors: Laurie Gale, Sue Sentance
- Abstract summary: This paper investigates the debugging behaviour of K-12 students learning a text-based programming language. A range of behaviours were exhibited by students, skewed towards being ineffective. We argue that students struggling to debug possess fragile knowledge, a lens through which we view the results.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Background and context: Debugging is a common and often frustrating challenge for beginner programmers. Understanding students' debugging processes can help us identify the difficulties and misunderstandings they possess. However, we currently have limited knowledge of how secondary students debug in a text-based language, a medium through which millions of students will learn to program in the future. Objectives: In this paper, we investigate the debugging behaviour of K-12 students learning a text-based programming language, as part of an effort to shape how to effectively teach debugging to these students. Method: We collected log data from 73 students attempting a set of debugging exercises using an online code editor. We inductively analysed these logs using qualitative content analysis, generating a categorisation of the debugging behaviours observed. Findings: A range of behaviours were exhibited by students, skewed towards being ineffective. Most students were able to partially locate errors but often struggled to resolve them, sometimes introducing additional errors in the process. We argue that students struggling to debug possess fragile knowledge, a lens through which we view the results. Implications: This paper highlights some of the difficulties K-12 learners have when debugging in a text-based programming language. We argue, like much related work, that effective debugging strategies should be explicitly taught, while ineffective strategies should be discouraged.
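The abstract describes inductively categorising debugging behaviours from editor log data. As a rough illustration of that kind of log-based analysis, the sketch below assigns a coarse behaviour label to one student's event log. The event names, category labels, and thresholds are all invented for illustration and are not the authors' actual coding scheme.

```python
# Hypothetical sketch of categorising student edit/run logs from an online
# code editor. Categories and thresholds are illustrative assumptions only.
from collections import Counter

def categorise_session(events):
    """Assign a coarse debugging-behaviour label to one student's log.

    `events` is a list of (action, detail) tuples, e.g. ("run", "error")
    or ("edit", "line 3"). The thresholds below are invented.
    """
    counts = Counter(action for action, _ in events)
    runs = counts["run"]
    edits = counts["edit"]
    if runs == 0:
        return "no testing"       # never ran the program at all
    if edits > 0 and runs / edits < 0.2:
        return "bulk editing"     # many edits between rare runs
    if runs / max(edits, 1) > 3:
        return "trial and error"  # repeated runs with few edits
    return "incremental"          # alternating small edits and runs

session = [("edit", "line 3"), ("run", "error"),
           ("edit", "line 3"), ("run", "ok")]
print(categorise_session(session))  # → incremental
```

A real qualitative content analysis would, of course, involve human coding of the logs rather than fixed thresholds; this only sketches the shape of the data involved.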
Related papers
- Modeling Student Learning with 3.8 Million Program Traces [52.153493498021895]
We introduce a dataset of over 3.8 million programming reasoning traces from users of Pencil Code.
We find that models trained on real traces are stronger at modeling diverse student behavior.
We show that we can help students recover from mistakes by steering code generation models to identify a sequence of edits that will result in more correct code.
arXiv Detail & Related papers (2025-10-06T17:37:17Z)
- NL-Debugging: Exploiting Natural Language as an Intermediate Representation for Code Debugging [68.42255321759062]
Recent advancements in large language models (LLMs) have shifted attention toward leveraging natural language reasoning to enhance code-related tasks.
In this paper, we introduce NL-Debugging, a novel framework that employs natural language as an intermediate representation to improve code debugging.
arXiv Detail & Related papers (2025-05-21T10:38:50Z)
- Learning Code-Edit Embedding to Model Student Debugging Behavior [2.1485350418225244]
We propose an encoder-decoder-based model that learns meaningful code-edit embeddings between consecutive student code submissions.
It enables personalized next-step code suggestions that maintain the student's coding style while improving test case correctness.
arXiv Detail & Related papers (2025-02-26T18:54:39Z)
- Simulated Interactive Debugging [7.3742419367796535]
We present our approach, called Simulated Interactive Debugging, that interactively guides students along the debugging process.
The guidance aims to empower the students to repair their solutions and have a proper learning experience.
We developed an implementation using traditional fault localization techniques and large language models.
arXiv Detail & Related papers (2025-01-16T17:47:18Z)
- BugSpotter: Automated Generation of Code Debugging Exercises [22.204802715829615]
This paper introduces BugSpotter, a tool to generate buggy code from a problem description and verify the synthesized bugs via a test suite.
Students interact with BugSpotter by designing failing test cases, where the buggy code's output differs from the expected result as defined by the problem specification.
arXiv Detail & Related papers (2024-11-21T16:56:33Z)
- A Proposal for a Debugging Learning Support Environment for Undergraduate Students Majoring in Computer Science [0.0]
Many students either do not know how to use a debugger or have never used one.
We implemented a function in Scratch that allows for self-learning of correct breakpoint placement.
arXiv Detail & Related papers (2024-07-25T03:34:19Z)
- Leveraging Print Debugging to Improve Code Generation in Large Language Models [63.63160583432348]
Large language models (LLMs) have made significant progress in code generation tasks.
But their performance in tackling programming problems with complex data structures and algorithms remains suboptimal.
We propose an in-context learning approach that guides LLMs to debug by using a "print debug" method.
arXiv Detail & Related papers (2024-01-10T18:37:59Z)
- NuzzleBug: Debugging Block-Based Programs in Scratch [11.182625995483862]
NuzzleBug is an extension of the popular block-based programming environment Scratch.
It is an interrogative debugger that enables users to ask questions about executions and provides answers.
We find that teachers consider NuzzleBug to be useful, and children can use it to debug faulty programs effectively.
arXiv Detail & Related papers (2023-09-25T18:56:26Z)
- Teaching Large Language Models to Self-Debug [62.424077000154945]
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
arXiv Detail & Related papers (2023-04-11T10:43:43Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- On the Robustness of Language Encoders against Grammatical Errors [66.05648604987479]
We collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.
Results confirm that the performance of all tested models is affected but the degree of impact varies.
arXiv Detail & Related papers (2020-05-12T11:01:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.