Why Stop at One Error? Benchmarking LLMs as Data Science Code Debuggers for Multi-Hop and Multi-Bug Errors
- URL: http://arxiv.org/abs/2503.22388v2
- Date: Sat, 17 May 2025 16:56:07 GMT
- Title: Why Stop at One Error? Benchmarking LLMs as Data Science Code Debuggers for Multi-Hop and Multi-Bug Errors
- Authors: Zhiyu Yang, Shuo Wang, Yukun Yan, Yang Deng,
- Abstract summary: We introduce DSDBench, the Data Science Debugging Benchmark, the first benchmark for systematic evaluation of LLMs on multi-hop error tracing and multi-bug detection. DSDBench includes 1,117 annotated samples with 741 cause-effect error pairs and runtime error messages. Evaluations of state-of-the-art LLMs on DSDBench show significant performance gaps.
- Score: 13.332407319448803
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LLMs are transforming software development, yet current code generation and code repair benchmarks mainly assess syntactic and functional correctness in simple, single-error cases. LLMs' capabilities to autonomously find and fix runtime logical errors in complex data science code remain largely unexplored. To address this gap, we introduce DSDBench: the Data Science Debugging Benchmark, the first benchmark for systematic evaluation of LLMs on multi-hop error tracing and multi-bug detection in data science code debugging. DSDBench adapts datasets from existing data science task benchmarks, such as DABench and MatPlotBench, featuring realistic data science debugging tasks with automatically synthesized multi-hop, multi-bug code snippets. DSDBench includes 1,117 annotated samples with 741 cause-effect error pairs and runtime error messages. Evaluations of state-of-the-art LLMs on DSDBench show significant performance gaps, highlighting challenges in debugging logical runtime errors in data science code. DSDBench offers a crucial resource to evaluate and improve LLMs' debugging and reasoning capabilities, enabling more reliable AI-assisted data science in the future. DSDBench is publicly available at github.com/KevinCL16/DSDBench.
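To make the multi-hop, multi-bug setting concrete, below is a small illustrative sketch of the kind of cause-effect error pair DSDBench targets; the snippet and the annotation field names are assumptions for illustration, not samples or schema taken from the benchmark.

```python
import pandas as pd

df = pd.DataFrame({"price": ["10", "12", "n/a"], "qty": [1, 2, 3]})

# Cause (hop 1): the price column is never coerced to numeric, so "+" silently
# builds concatenated strings instead of revenues -- no error is raised here.
df["revenue"] = df["price"] + df["qty"].astype(str)

# Intermediate hop: string concatenation still "works", masking the bug.
combined = df["revenue"].sum()

# Effect (hop 2): the runtime error only surfaces here, far from the causal line.
try:
    df["revenue"].mean()
except TypeError as exc:
    runtime_error = f"{type(exc).__name__}: {exc}"

# A DSDBench-style cause-effect annotation might look roughly like this
# (field names are hypothetical, not the benchmark's actual schema).
annotation = {
    "cause": 'df["revenue"] = df["price"] + df["qty"].astype(str)',
    "effect": 'df["revenue"].mean()',
    "error_message": runtime_error,
}
print(annotation)
```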
Related papers
- DSCodeBench: A Realistic Benchmark for Data Science Code Generation [16.227266086218425]
DSCodeBench is a new benchmark designed to evaluate large language models (LLMs) on complicated and realistic data science code generation tasks. It consists of 1,000 carefully constructed problems sourced from GitHub across ten widely used Python data science libraries. Compared to the current state-of-the-art benchmark DS-1000, DSCodeBench offers a more challenging and representative testbed.
arXiv Detail & Related papers (2025-05-21T15:11:26Z)
- OpenCodeInstruct: A Large-scale Instruction Tuning Dataset for Code LLMs [62.68905180014956]
We introduce OpenCodeInstruct, the largest open-access instruction tuning dataset, comprising 5 million diverse samples.
Each sample includes a programming question, solution, test cases, execution feedback, and LLM-generated quality assessments.
We fine-tune various base models, including LLaMA and Qwen, across multiple scales (1B+, 3B+, and 7B+) using our dataset.
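As a rough illustration of that per-sample structure, the sketch below models one record as a Python dataclass; the field names and example values are assumptions based on the summary above, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CodeInstructSample:
    question: str                       # programming problem statement
    solution: str                       # reference implementation
    test_cases: List[str]               # executable assertions
    execution_feedback: str             # output from running the tests
    quality_assessment: Dict[str, str]  # LLM-generated quality ratings

sample = CodeInstructSample(
    question="Write a function fib(n) returning the n-th Fibonacci number.",
    solution="def fib(n):\n    a, b = 0, 1\n    for _ in range(n):\n        a, b = b, a + b\n    return a",
    test_cases=["assert fib(0) == 0", "assert fib(10) == 55"],
    execution_feedback="All tests passed.",
    quality_assessment={"correctness": "pass", "style": "idiomatic"},
)
```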
arXiv Detail & Related papers (2025-04-05T02:52:16Z)
- SpecTool: A Benchmark for Characterizing Errors in Tool-Use LLMs [77.79172008184415]
SpecTool is a new benchmark to identify error patterns in LLM output on tool-use tasks.
We show that even the most prominent LLMs exhibit these error patterns in their outputs.
Researchers can use the analysis and insights from SPECTOOL to guide their error mitigation strategies.
arXiv Detail & Related papers (2024-11-20T18:56:22Z)
- BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data [61.936320820180875]
Large language models (LLMs) have become increasingly pivotal across various domains.
BabelBench is an innovative benchmark framework that evaluates the proficiency of LLMs in managing multimodal multistructured data with code execution.
Our experimental findings on BabelBench indicate that even cutting-edge models like ChatGPT 4 exhibit substantial room for improvement.
arXiv Detail & Related papers (2024-10-01T15:11:24Z)
- Fixing Function-Level Code Generation Errors for Foundation Large Language Models [6.137340149146578]
We conduct an empirical study of the generation errors and analyze their causes, identifying 19 categories of error causes.
Our empirical analysis indicated that three of these causes can be directly fixed.
We propose a fixing method called LlmFix, which addresses these three types of errors through a three-step process.
arXiv Detail & Related papers (2024-09-01T09:40:15Z)
- COAST: Enhancing the Code Debugging Ability of LLMs through Communicative Agent Based Data Synthesis [29.667170755786508]
We introduce EVAL, a benchmark for evaluating the code debugging abilities of Large Language Models. We propose the COmmunicative Agent-based data SynThesis (COAST) framework, which employs a multi-agent system to generate high-quality training data. Results demonstrate that COAST-generated data outperform human-curated and GPT-4-generated data.
arXiv Detail & Related papers (2024-08-09T11:35:44Z)
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter yet more complicated as compared to canonical solutions.
We develop a taxonomy of bugs for incorrect codes that includes three categories and 12 sub-categories, and analyze the root cause for common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
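The summary above suggests a critique-then-repair loop driven by interpreter feedback; the minimal sketch below illustrates such a loop under our own assumptions (a placeholder call_llm client and Python as the target language), not the paper's actual implementation.

```python
import subprocess
import sys
import tempfile

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client."""
    raise NotImplementedError("plug in an LLM client here")

def run_program(code: str) -> str:
    """Run the candidate program and return interpreter feedback."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=30)
    return "OK" if proc.returncode == 0 else proc.stderr

def self_critique_repair(task: str, code: str, max_rounds: int = 3) -> str:
    """Iteratively critique and rewrite the code until it runs cleanly."""
    for _ in range(max_rounds):
        feedback = run_program(code)
        if feedback == "OK":
            break
        critique = call_llm(
            f"Task:\n{task}\n\nCode:\n{code}\n\nInterpreter feedback:\n{feedback}\n\n"
            "Classify the bug type and explain its root cause."
        )
        code = call_llm(
            f"Fix the code based on this critique.\n\nCritique:\n{critique}\n\n"
            f"Code:\n{code}\n\nReturn only the corrected program."
        )
    return code
```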
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
- DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph [70.79413606968814]
We introduce Dynamic Evaluation of LLMs via Adaptive Reasoning Graph Evolvement (DARG) to dynamically extend current benchmarks with controlled complexity and diversity.
Specifically, we first extract the reasoning graphs of data points in current benchmarks and then perturb the reasoning graphs to generate novel testing data.
Such newly generated test samples can have different levels of complexity while maintaining linguistic diversity similar to the original benchmarks.
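A toy sketch of the graph-perturbation idea (our own illustration, not DARG's implementation): represent a word problem's reasoning steps as a small DAG, then append extra hops to raise complexity while keeping the ground-truth answer computable.

```python
import random

# Toy reasoning graph for "Alice has 3 apples, buys 4 more, then eats 2."
graph = {
    "a": {"op": "const", "value": 3},
    "b": {"op": "const", "value": 4},
    "c": {"op": "add", "inputs": ["a", "b"]},         # 3 + 4
    "d": {"op": "sub", "inputs": ["c"], "value": 2},  # ... - 2 (answer node)
}

def evaluate(g: dict, node: str) -> int:
    """Compute the ground-truth answer implied by the graph."""
    spec = g[node]
    if spec["op"] == "const":
        return spec["value"]
    acc = sum(evaluate(g, parent) for parent in spec["inputs"])
    return acc + spec.get("value", 0) if spec["op"] == "add" else acc - spec.get("value", 0)

def perturb(g: dict, last: str = "d", extra_hops: int = 1):
    """Append extra arithmetic hops after the answer node to raise complexity."""
    g = dict(g)
    for i in range(extra_hops):
        name = f"p{i}"
        g[name] = {"op": "add", "inputs": [last], "value": random.randint(1, 5)}
        last = name
    return g, last

harder, answer_node = perturb(graph, extra_hops=2)
print(evaluate(harder, answer_node))  # ground truth for the regenerated, harder question
```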
arXiv Detail & Related papers (2024-06-25T04:27:53Z)
- MEIC: Re-thinking RTL Debug Automation using LLMs [18.964523115622928]
This work introduces a novel framework, Make Each Iteration Count (MEIC).
MEIC is suitable for identifying and correcting both syntax and function errors.
To evaluate our framework, we provide an open-source dataset comprising 178 common RTL programming errors.
arXiv Detail & Related papers (2024-05-10T22:32:39Z)
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step-by-step [35.76881887942524]
Large language models (LLMs) are leading significant progress in code generation.
In this study, we introduce Large Language Model Debugger (LDB)
LDB segments the programs into basic blocks and tracks the values of intermediate variables after each block throughout the runtime execution.
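As a rough, line-level approximation of that idea (LDB itself works at the basic-block level and is more involved), the sketch below uses Python's sys.settrace to record intermediate variable values during execution so they can be inspected for debugging.

```python
import sys

def trace_variables(func, *args):
    """Run func(*args) and record (line number, local variables) snapshots."""
    snapshots = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            snapshots.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, snapshots

def buggy_mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)  # off-by-one bug shows up in the trace

value, trace = trace_variables(buggy_mean, [2, 4, 6])
for lineno, local_vars in trace:
    print(lineno, local_vars)
```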
arXiv Detail & Related papers (2024-02-25T00:56:27Z)
- DebugBench: Evaluating Debugging Capability of Large Language Models [80.73121177868357]
DebugBench is a benchmark for evaluating the debugging capability of Large Language Models (LLMs).
It covers four major bug categories and 18 minor types in C++, Java, and Python.
We evaluate two commercial and four open-source models in a zero-shot scenario.
arXiv Detail & Related papers (2024-01-09T15:46:38Z)
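For concreteness, a zero-shot debugging query in this setting might look like the following minimal sketch; the prompt wording is an assumption, not DebugBench's actual template.

```python
# Hypothetical zero-shot prompt builder for a debugging benchmark query.
PROMPT_TEMPLATE = """You are given a buggy {language} solution to a programming problem.

Problem:
{problem}

Buggy code:
{buggy_code}

Return only the corrected code, with no explanation."""

def build_zero_shot_prompt(language: str, problem: str, buggy_code: str) -> str:
    return PROMPT_TEMPLATE.format(language=language, problem=problem, buggy_code=buggy_code)

print(build_zero_shot_prompt(
    language="Python",
    problem="Return the sum of a list of integers.",
    buggy_code="def total(xs):\n    return sum(xs, 1)  # bug: wrong start value",
))
```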
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.