MIO: Multiverse Debugging in the Face of Input/Output -- Extended Version with Additional Appendices
- URL: http://arxiv.org/abs/2509.06845v1
- Date: Mon, 08 Sep 2025 16:15:18 GMT
- Authors: Tom Lauwaerts, Maarten Steevens, Christophe Scholliers
- Abstract summary: We present a novel approach to multiverse debugging, which can accommodate a broad spectrum of input/output operations. We have developed a prototype, called MIO, leveraging the WARDuino WebAssembly virtual machine to demonstrate the feasibility and efficiency of our techniques.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Debugging non-deterministic programs on microcontrollers is notoriously challenging, especially when bugs manifest in unpredictable, input-dependent execution paths. A recent approach, called multiverse debugging, makes it easier to debug non-deterministic programs by allowing programmers to explore all potential execution paths. Current multiverse debuggers enable both forward and backward traversal of program paths, and some facilitate jumping to any previously visited states, potentially branching into alternative execution paths within the state space. Unfortunately, debugging programs that involve input/output operations using existing multiverse debuggers can reveal inaccessible program states, i.e. states which are not encountered during regular execution. This can significantly hinder the debugging process, as the programmer may spend substantial time exploring and examining inaccessible program states, or worse, may mistakenly assume a bug is present in the code, when in fact, the issue is caused by the debugger. This paper presents a novel approach to multiverse debugging, which can accommodate a broad spectrum of input/output operations. We provide the semantics of our approach and prove the correctness of our debugger, ensuring that despite having support for a wide range of input/output operations the debugger will only explore those program states which can be reached during regular execution. We have developed a prototype, called MIO, leveraging the WARDuino WebAssembly virtual machine to demonstrate the feasibility and efficiency of our techniques. As a demonstration of the approach we highlight a color dial built with a Lego Mindstorms motor and color sensor, providing a tangible example of how our approach enables multiverse debugging for programs running on an STM32 microcontroller.
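The central idea of the abstract — exploring every execution branch of a non-deterministic program while visiting only states reachable under real input semantics — can be sketched as a breadth-first traversal of a toy state space. This is a hypothetical illustration, not MIO's actual implementation (which operates on a WebAssembly virtual machine); the `successors` program model and state encoding are invented for the example.

```python
from collections import deque

# Toy program model (hypothetical, for illustration only): a state is a
# (program_counter, accumulator) pair. Non-determinism comes from an input
# read at pc 0 that may yield 0 or 1 -- the "multiverse" branches there.
def successors(state):
    pc, acc = state
    if pc == 0:
        # Input read: both possible input values spawn a branch.
        return [(1, acc + bit) for bit in (0, 1)]
    if pc == 1:
        # Deterministic step: double the accumulator.
        return [(2, acc * 2)]
    return []  # pc == 2: program halted, no successors

def explore(initial):
    """Breadth-first traversal of all reachable states.

    Because successors() only yields states producible by actual input
    values, the traversal never visits an inaccessible state -- the
    property the paper proves for its debugger semantics.
    """
    seen, frontier = {initial}, deque([initial])
    while frontier:
        for nxt in successors(frontier.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(sorted(explore((0, 0))))
```

A debugger built on this traversal can step forward along any branch or jump back to any element of `seen`, while the restriction to generated successors rules out the inaccessible states the paper warns about.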
Related papers
- InspectCoder: Dynamic Analysis-Enabled Self Repair through interactive LLM-Debugger Collaboration [71.18377595277018]
Large Language Models (LLMs) frequently generate buggy code with complex logic errors that are challenging to diagnose. We present InspectCoder, the first agentic program repair system that empowers LLMs to actively conduct dynamic analysis via interactive debugger control.
arXiv Detail & Related papers (2025-10-21T06:26:29Z) - Poster: libdebug, Build Your Own Debugger for a Better (Hello) World [0.6990493129893112]
libdebug is a Python library for programmatic debugging of userland binary executables. It is released as an open-source project, along with comprehensive documentation to encourage use and collaboration across the community. We find that the median latency of syscall and breakpoint handling in libdebug is 3 to 4 times lower than that of GDB.
arXiv Detail & Related papers (2025-06-03T09:14:57Z) - NL-Debugging: Exploiting Natural Language as an Intermediate Representation for Code Debugging [68.42255321759062]
Recent advancements in large language models (LLMs) have shifted attention toward leveraging natural language reasoning to enhance code-related tasks. In this paper, we introduce NL-DEBUGGING, a novel framework that employs natural language as an intermediate representation to improve code debugging.
arXiv Detail & Related papers (2025-05-21T10:38:50Z) - Moldable Exceptions [0.840358257755792]
We introduce "moldable exceptions", a lightweight mechanism to adapt a debugger's interface based on contextual information.
We demonstrate, through a series of examples, how moldable exceptions can enhance a live programming environment.
arXiv Detail & Related papers (2024-08-31T14:14:22Z) - A Proposal for a Debugging Learning Support Environment for Undergraduate Students Majoring in Computer Science [0.0]
Students do not know how to use a debugger or have never used one.
We implemented a function in Scratch that allows for self-learning of correct breakpoint placement.
arXiv Detail & Related papers (2024-07-25T03:34:19Z) - VDebugger: Harnessing Execution Feedback for Debugging Visual Programs [103.61860743476933]
We introduce VDebugger, a critic-refiner framework trained to localize and debug visual programs by tracking execution step by step.
VDebugger identifies and corrects program errors leveraging detailed execution feedback, improving interpretability and accuracy.
Evaluations on six datasets demonstrate VDebugger's effectiveness, showing performance improvements of up to 3.2% in downstream task accuracy.
arXiv Detail & Related papers (2024-06-19T11:09:16Z) - Leveraging Print Debugging to Improve Code Generation in Large Language Models [63.63160583432348]
Large language models (LLMs) have made significant progress in code generation tasks.
But their performance in tackling programming problems with complex data structures and algorithms remains suboptimal.
We propose an in-context learning approach that guides LLMs to debug by using a "print debugging" method.
arXiv Detail & Related papers (2024-01-10T18:37:59Z) - NuzzleBug: Debugging Block-Based Programs in Scratch [11.182625995483862]
NuzzleBug is an extension of the popular block-based programming environment Scratch.
It is an interrogative debugger that enables users to ask questions about executions and provides answers.
We find that teachers consider NuzzleBug to be useful, and children can use it to debug faulty programs effectively.
arXiv Detail & Related papers (2023-09-25T18:56:26Z) - Teaching Large Language Models to Self-Debug [62.424077000154945]
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
arXiv Detail & Related papers (2023-04-11T10:43:43Z) - Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
arXiv Detail & Related papers (2022-06-04T22:01:05Z) - Static Prediction of Runtime Errors by Learning to Execute Programs with External Resource Descriptions [31.46148643917194]
We introduce a real-world dataset and task for predicting runtime errors.
We develop an interpreter-inspired architecture with an inductive bias towards mimicking program executions.
We show that the model can also predict the location of the error, despite being trained only on labels indicating the presence/absence and kind of error.
arXiv Detail & Related papers (2022-03-07T23:17:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.