NuzzleBug: Debugging Block-Based Programs in Scratch
- URL: http://arxiv.org/abs/2309.14465v1
- Date: Mon, 25 Sep 2023 18:56:26 GMT
- Title: NuzzleBug: Debugging Block-Based Programs in Scratch
- Authors: Adina Deiner and Gordon Fraser
- Abstract summary: NuzzleBug is an extension of the popular block-based programming environment Scratch.
It is an interrogative debugger that enables learners to ask questions about executions and provides answers.
We find that teachers consider NuzzleBug to be useful, and children can use it to debug faulty programs effectively.
- Score: 11.182625995483862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While professional integrated programming environments support developers
with advanced debugging functionality, block-based programming environments for
young learners often provide no support for debugging at all, thus inhibiting
debugging and preventing debugging education. In this paper we introduce
NuzzleBug, an extension of the popular block-based programming environment
Scratch that provides the missing debugging support. NuzzleBug allows
controlling the executions of Scratch programs with classical debugging
functionality such as stepping and breakpoints, and it is an omniscient
debugger that also allows reverse stepping. To support learners in deriving
hypotheses that guide debugging, NuzzleBug is an interrogative debugger that
enables learners to ask questions about executions and provides answers explaining the
behavior in question. In order to evaluate NuzzleBug, we survey the opinions of
teachers, and study the effects on learners in terms of debugging effectiveness
and efficiency. We find that teachers consider NuzzleBug to be useful, and
children can use it to debug faulty programs effectively. However, systematic
debugging requires dedicated training, and even when NuzzleBug can provide
correct answers learners may require further help to comprehend faults and
necessary fixes, thus calling for further research on improving debugging
techniques and the information they provide.
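To make the abstract's terminology concrete, the sketch below illustrates the two ideas it combines: an omniscient debugger records every executed step so the history of an execution can be inspected after the fact, and an interrogative debugger answers "why" questions from that recorded history. This is a minimal, hypothetical Python sketch, not NuzzleBug's implementation; the Tracer and Step classes, the why_value query, and the toy Scratch-like blocks are all invented for illustration.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Any


@dataclass
class Step:
    index: int      # position in the recorded execution history
    block: str      # the "block" (statement) that executed
    variable: str   # variable written by the block, if any
    value: Any      # value the variable was set to


class Tracer:
    """Records every executed block so questions can be asked afterwards."""

    def __init__(self) -> None:
        self.history: list[Step] = []

    def record(self, block: str, variable: str = "", value: Any = None) -> None:
        self.history.append(Step(len(self.history), block, variable, value))

    def why_value(self, variable: str, value: Any) -> list[Step]:
        """Answer 'why does <variable> have <value>?' by returning the step
        that last wrote that value together with everything leading up to it."""
        writes = [s for s in self.history if s.variable == variable and s.value == value]
        if not writes:
            return []
        return self.history[: writes[-1].index + 1]


# A toy "faulty" script: the score is reset to 0 after being incremented,
# so the program ends with score = 0 instead of 1.
tracer = Tracer()
tracer.record("when green flag clicked")
tracer.record("set score to 0", "score", 0)
tracer.record("change score by 1", "score", 1)
tracer.record("set score to 0", "score", 0)   # the fault

for step in tracer.why_value("score", 0):
    print(f"step {step.index}: {step.block}")
```

Running the sketch lists the steps leading up to the final "score = 0", which points directly at the block that resets the score after it was incremented; an interrogative debugger presents this kind of causal chain as the answer to the learner's question.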
Related papers
- Moldable Exceptions [0.840358257755792]
We introduce "moldable exceptions", a lightweight mechanism to adapt a debugger's interface based on contextual information.
We present, through a series of examples, how moldable exceptions can enhance a live programming environment.
arXiv Detail & Related papers (2024-08-31T14:14:22Z)
- A Proposal for a Debugging Learning Support Environment for Undergraduate Students Majoring in Computer Science [0.0]
Students do not know how to use a debugger or have never used one.
We implemented a function in Scratch that allows for self-learning of correct breakpoint placement.
arXiv Detail & Related papers (2024-07-25T03:34:19Z)
- Towards Practical and Useful Automated Program Repair for Debugging [4.216808129651161]
PracAPR is an interactive repair system that works in an Integrated Development Environment (IDE).
PracAPR does not require a test suite or program re-execution.
arXiv Detail & Related papers (2024-07-12T03:19:54Z)
- VDebugger: Harnessing Execution Feedback for Debugging Visual Programs [103.61860743476933]
We introduce VDebugger, a critic-refiner framework trained to localize and debug visual programs by tracking execution step by step.
VDebugger identifies and corrects program errors by leveraging detailed execution feedback, improving interpretability and accuracy.
Evaluations on six datasets demonstrate VDebugger's effectiveness, showing performance improvements of up to 3.2% in downstream task accuracy.
arXiv Detail & Related papers (2024-06-19T11:09:16Z)
- Leveraging Print Debugging to Improve Code Generation in Large Language Models [63.63160583432348]
Large language models (LLMs) have made significant progress in code generation tasks.
But their performance in tackling programming problems with complex data structures and algorithms remains suboptimal.
We propose an in-context learning approach that guides LLMs to debug by using a "print debugging" method.
arXiv Detail & Related papers (2024-01-10T18:37:59Z)
- DebugBench: Evaluating Debugging Capability of Large Language Models [80.73121177868357]
DebugBench is a benchmark for evaluating the debugging capability of large language models (LLMs).
It covers four major bug categories and 18 minor types in C++, Java, and Python.
We evaluate two commercial and four open-source models in a zero-shot scenario.
arXiv Detail & Related papers (2024-01-09T15:46:38Z)
- Teaching Large Language Models to Self-Debug [62.424077000154945]
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
arXiv Detail & Related papers (2023-04-11T10:43:43Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Explanation-Based Human Debugging of NLP Models: A Survey [15.115037763351003]
We call this problem explanation-based human debugging (EBHD).
In particular, we categorize and discuss existing works along three main dimensions of EBHD.
arXiv Detail & Related papers (2021-04-30T17:53:07Z)
- Eye: Program Visualizer for CS2 [1.319058156672392]
Eye is an interactive tool that visualizes a program's execution as it runs.
It demonstrates properties and usage of data structures in a general environment.
Eye opens up a gateway for CS2 students to more easily understand the myriad programs available on online programming websites.
arXiv Detail & Related papers (2021-01-28T16:16:59Z)
- Learning by Fixing: Solving Math Word Problems with Weak Supervision [70.62896781438694]
Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions.
We introduce a weakly-supervised paradigm for learning MWPs.
Our method only requires the annotations of the final answers and can generate various solutions for a single problem.
arXiv Detail & Related papers (2020-12-19T03:10:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.