Moldable Exceptions
- URL: http://arxiv.org/abs/2409.00465v1
- Date: Sat, 31 Aug 2024 14:14:22 GMT
- Title: Moldable Exceptions
- Authors: Andrei Chiş, Tudor Gîrba, Oscar Nierstrasz
- Abstract summary: We introduce "moldable exceptions", a lightweight mechanism to adapt a debugger's interface based on contextual information.
We present, through a series of examples, how moldable exceptions can enhance a live programming environment.
- Score: 0.840358257755792
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Debugging is hard. Interactive debuggers are mostly the same. They show you a stack, a way to sample the state of the stack, and, if the debugger is live, a way to step through execution. The standard interactive debugger for a general-purpose programming language provided by a mainstream IDE mostly offers a low-level interface in terms of generic language constructs to track down and fix bugs. A custom debugger, such as those developed for specific application domains, offers alternative interfaces more suitable to the specific execution context of the program being debugged. Custom debuggers offering contextual debugging views and actions can greatly improve our ability to reason about the current problem. Implementing such custom debuggers, however, is non-trivial, and poses a barrier to improving the debugging experience. In this paper we introduce "moldable exceptions", a lightweight mechanism to adapt a debugger's interface based on contextual information provided by a raised exception. We present, through a series of examples, how moldable exceptions can enhance a live programming environment.
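To make the mechanism concrete, below is a minimal Python sketch of the idea; all names here (`MoldableError`, `debugger_view`, `open_debugger`) are hypothetical, and the paper's actual implementation targets Pharo's Glamorous Toolkit, so this illustrates the concept rather than the authors' API.

```python
import traceback

# A moldable exception carries its own contextual debugger view; the
# debugger consults that hook before falling back to a generic stack trace.
class MoldableError(Exception):
    def debugger_view(self) -> str:
        """Return a domain-specific rendering of the failure context."""
        raise NotImplementedError

class ParseError(MoldableError):
    """Domain exception: points at the offending position in the source."""
    def __init__(self, source: str, position: int):
        super().__init__(f"parse error at position {position}")
        self.source = source
        self.position = position

    def debugger_view(self) -> str:
        # Contextual view: the source with a caret under the failure point,
        # more useful here than a raw stack and a generic variable pane.
        return self.source + "\n" + " " * self.position + "^"

def open_debugger(exc: BaseException) -> None:
    if isinstance(exc, MoldableError):
        print(exc.debugger_view())  # adapted, exception-specific interface
    else:
        traceback.print_exception(type(exc), exc, exc.__traceback__)

try:
    raise ParseError("let x = (1 + ;", position=13)
except Exception as e:
    open_debugger(e)
```

In the paper's setting, the exception would answer with a full moldable view (tables, charts, domain-specific visualizations) that the Glamorous Toolkit debugger embeds in place of the default stack pane.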
Related papers
- MdEval: Massively Multilingual Code Debugging [37.48700033342978]
We propose the first massively multilingual debugging benchmark, which includes 3.6K test samples across 18 programming languages.
We introduce the instruction corpus MDEVAL-INSTRUCT by injecting bugs into correct multilingual queries and solutions.
Our experiments on MDEVAL reveal a notable performance gap between open-source models and closed-source LLMs.
arXiv Detail & Related papers (2024-11-04T17:36:40Z)
- A Proposal for a Debugging Learning Support Environment for Undergraduate Students Majoring in Computer Science [0.0]
Many students do not know how to use a debugger or have never used one.
We implemented a function in Scratch that allows for self-learning of correct breakpoint placement.
arXiv Detail & Related papers (2024-07-25T03:34:19Z)
- VDebugger: Harnessing Execution Feedback for Debugging Visual Programs [103.61860743476933]
We introduce VDebugger, a critic-refiner framework trained to localize and debug visual programs by tracking execution step by step.
VDebugger identifies and corrects program errors by leveraging detailed execution feedback, improving interpretability and accuracy.
Evaluations on six datasets demonstrate VDebugger's effectiveness, showing performance improvements of up to 3.2% in downstream task accuracy.
arXiv Detail & Related papers (2024-06-19T11:09:16Z)
- Accurate Coverage Metrics for Compiler-Generated Debugging Information [0.0]
Many tools rely on compiler-produced metadata to present a source-language view of program states.
Current approaches for measuring the extent of coverage of local variables are based on crude assumptions.
We propose some new metrics, computable by our tools, which could serve as motivation for language implementations to improve debug quality.
arXiv Detail & Related papers (2024-02-07T13:01:28Z)
- Leveraging Print Debugging to Improve Code Generation in Large Language Models [63.63160583432348]
Large language models (LLMs) have made significant progress in code generation tasks.
But their performance in tackling programming problems with complex data structures and algorithms remains suboptimal.
We propose an in-context learning approach that guides LLMs to debug by using a "print debug" method.
arXiv Detail & Related papers (2024-01-10T18:37:59Z)
- DebugBench: Evaluating Debugging Capability of Large Language Models [80.73121177868357]
DebugBench is a benchmark for evaluating the debugging capability of Large Language Models (LLMs).
It covers four major bug categories and 18 minor types in C++, Java, and Python.
We evaluate two commercial and four open-source models in a zero-shot scenario.
arXiv Detail & Related papers (2024-01-09T15:46:38Z)
- NuzzleBug: Debugging Block-Based Programs in Scratch [11.182625995483862]
NuzzleBug is an extension of the popular block-based programming environment Scratch.
It is an interrogative debugger that enables users to ask questions about program executions and provides answers.
We find that teachers consider NuzzleBug to be useful, and children can use it to debug faulty programs effectively.
arXiv Detail & Related papers (2023-09-25T18:56:26Z)
- A Static Evaluation of Code Completion by Large Language Models [65.18008807383816]
Execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems.
However, static analysis tools such as linters, which can detect errors without running the program, have not been well explored for evaluating code generation models.
We propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees.
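As a loose illustration of what such an AST-based check can catch, here is a sketch built only on Python's standard `ast` module; the authors' actual framework and error taxonomy are richer, and the unbound-name heuristic below deliberately ignores scoping.

```python
import ast
import builtins

def static_errors(completion: str) -> list[str]:
    """Report errors detectable without executing the completion (a sketch)."""
    try:
        tree = ast.parse(completion)  # build the Abstract Syntax Tree
    except SyntaxError as e:
        return [f"syntax error at line {e.lineno}: {e.msg}"]

    bound = set(dir(builtins))
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and isinstance(node.ctx, (ast.Store, ast.Del)):
            bound.add(node.id)  # assignments, loop targets, deletions
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            bound.add(node.name)  # def / class names
        elif isinstance(node, ast.arg):
            bound.add(node.arg)  # function parameters
        elif isinstance(node, ast.alias):
            bound.add(node.asname or node.name.split(".")[0])  # imports

    loaded = {n.id for n in ast.walk(tree)
              if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
    return [f"undefined name: {name}" for name in sorted(loaded - bound)]

print(static_errors("def f(x):\n    return x + y\n"))  # ['undefined name: y']
```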
arXiv Detail & Related papers (2023-06-05T19:23:34Z)
- Teaching Large Language Models to Self-Debug [62.424077000154945]
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
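The core loop is easy to sketch; in the code below, `llm` is a hypothetical stand-in for any model call, and the paper's actual prompts use few-shot demonstrations rather than the bare instructions shown here.

```python
import traceback

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a concrete model API here")

def self_debug(task: str, test: str, max_rounds: int = 3) -> str:
    """Generate code, run its test, and feed failures back to the model."""
    code = llm(f"Write Python code for: {task}")
    for _ in range(max_rounds):
        try:
            exec(code + "\n" + test, {})  # run candidate plus its unit test
            return code  # test passed: accept the program
        except Exception:
            feedback = traceback.format_exc()
            code = llm(f"Task: {task}\nCode:\n{code}\n"
                       f"It failed with:\n{feedback}\nFix the code.")
    return code  # best effort after max_rounds of feedback
```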
arXiv Detail & Related papers (2023-04-11T10:43:43Z)
- Using Developer Discussions to Guide Fixing Bugs in Software [51.00904399653609]
We propose using bug report discussions, which are naturally occurring and available before the bug-fixing task is performed, avoiding the need to elicit additional information from developers.
We demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
arXiv Detail & Related papers (2022-11-11T16:37:33Z)
- Eye: Program Visualizer for CS2 [1.319058156672392]
Eye is an interactive tool that visualizes a program's execution as it runs.
It demonstrates properties and usage of data structures in a general environment.
Eye offers CS2 students a gateway to more easily understand the myriad programs available on online programming websites.
arXiv Detail & Related papers (2021-01-28T16:16:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.