Checker Bug Detection and Repair in Deep Learning Libraries
- URL: http://arxiv.org/abs/2410.06440v1
- Date: Wed, 9 Oct 2024 00:48:12 GMT
- Title: Checker Bug Detection and Repair in Deep Learning Libraries
- Authors: Nima Shiri Harzevili, Mohammad Mahdi Mohajer, Jiho Shin, Moshi Wei, Gias Uddin, Jinqiu Yang, Junjie Wang, Song Wang, Zhen Ming (Jack) Jiang, Nachiappan Nagappan
- Abstract summary: Checker bugs in Deep Learning (DL) libraries are critical yet not well-explored.
We present the first comprehensive study of DL checker bugs in two widely-used DL libraries.
We propose TensorGuard, a proof-of-concept RAG-based LLM tool to detect and fix checker bugs in DL libraries.
- Score: 30.494018435420706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Checker bugs in Deep Learning (DL) libraries are critical yet not well-explored. These bugs are often concealed in the input validation and error-checking code of DL libraries and can lead to silent failures, incorrect results, or unexpected program behavior in DL applications. Despite their potential to significantly impact the reliability and performance of DL-enabled systems built with these libraries, checker bugs have received limited attention. We present the first comprehensive study of DL checker bugs in two widely-used DL libraries, i.e., TensorFlow and PyTorch. Initially, we automatically collected a dataset of 2,418 commits from the TensorFlow and PyTorch repositories on GitHub from Sept. 2016 to Dec. 2023 using specific keywords related to checker bugs. Through manual inspection, we identified 527 DL checker bugs. Subsequently, we analyzed these bugs from three perspectives, i.e., root causes, symptoms, and fixing patterns. Using the knowledge gained via root cause analysis of checker bugs, we further propose TensorGuard, a proof-of-concept RAG-based, LLM-driven tool that detects and fixes checker bugs in DL libraries by prompt-engineering a series of ChatGPT prompts. We evaluated TensorGuard's performance on a test dataset that includes 92 buggy and 135 clean checker-related changes in TensorFlow and PyTorch from January 2024 to July 2024. Our results demonstrate that TensorGuard achieves a high average recall (94.51%) with Chain-of-Thought prompting and a balanced trade-off between precision and recall with the Zero-Shot and Few-Shot prompting strategies. In terms of patch generation, TensorGuard achieves an accuracy of 11.1%, which outperforms the state-of-the-art bug repair baseline by 2%. We have also applied TensorGuard to the latest six months of checker-related changes (493 changes) in Google's JAX library, which resulted in the detection of 64 new checker bugs.
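To make the notion of a checker bug concrete, the sketch below is a minimal, hypothetical example (not taken from TensorFlow or PyTorch source) of an operator whose missing input-validation check produces a silently wrong result, together with the kind of "add the missing check" fix that the paper's fixing-pattern analysis points to; all names here are illustrative.

```python
import numpy as np

def weighted_sum_buggy(x, weights):
    # Hypothetical op with a checker bug: the shapes of x and weights are
    # never validated. zip() silently drops trailing elements of the longer
    # argument, so mismatched inputs yield a wrong value instead of an
    # error -- a silent failure.
    return float(sum(xi * wi for xi, wi in zip(x, weights)))

def weighted_sum_fixed(x, weights):
    # Same op with the missing checker added: validate the inputs up front
    # and fail loudly, mirroring the "add a missing check" fixing pattern.
    x = np.asarray(x, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    if x.shape != weights.shape:
        raise ValueError(
            f"x and weights must have the same shape, got {x.shape} vs {weights.shape}")
    return float(np.dot(x, weights))

if __name__ == "__main__":
    x = [1.0, 2.0, 3.0]
    w = [0.5, 0.5]                      # one weight is missing
    print(weighted_sum_buggy(x, w))     # 1.5 -- silently wrong
    try:
        weighted_sum_fixed(x, w)
    except ValueError as err:
        print(err)                      # explicit error instead of a silent failure
```

The abstract only names the prompting strategies TensorGuard uses (Zero-Shot, Few-Shot, and Chain-of-Thought over a RAG pipeline); the template below is an assumed illustration of what a zero-shot detection prompt over a commit diff might look like, not TensorGuard's actual prompt.

```python
def build_zero_shot_prompt(commit_diff: str) -> str:
    # Assumed template; the real TensorGuard prompts are not shown in this listing.
    return (
        "You are reviewing a commit to a deep learning library.\n"
        "Decide whether the change introduces or fixes a checker bug, i.e. a "
        "missing or incorrect input-validation or error-checking condition.\n"
        "Answer 'buggy' or 'clean' and briefly explain why.\n\n"
        f"Commit diff:\n{commit_diff}\n"
    )
```

In a Chain-of-Thought variant of this sketch, the instruction would additionally ask the model to reason step by step about the validation logic before giving its verdict.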
Related papers
- CITADEL: Context Similarity Based Deep Learning Framework Bug Finding [36.34154201748415]
Existing deep learning (DL) framework testing tools have limited coverage of bug types.
We propose Citadel, a method that accelerates bug finding in terms of both efficiency and effectiveness.
arXiv Detail & Related papers (2024-06-18T01:51:16Z)
- Leveraging Stack Traces for Spectrum-based Fault Localization in the Absence of Failing Tests [44.13331329339185]
We introduce a new approach, SBEST, that integrates stack trace data with test coverage to enhance fault localization.
Our approach shows a significant improvement, increasing Mean Average Precision (MAP) by 32.22% and Mean Reciprocal Rank (MRR) by 17.43% over traditional stack trace ranking methods.
arXiv Detail & Related papers (2024-05-01T15:15:52Z)
- GPT-HateCheck: Can LLMs Write Better Functional Tests for Hate Speech Detection? [50.53312866647302]
HateCheck is a suite for testing fine-grained model functionalities on synthesized data.
We propose GPT-HateCheck, a framework to generate more diverse and realistic functional tests from scratch.
Crowd-sourced annotation demonstrates that the generated test cases are of high quality.
arXiv Detail & Related papers (2024-02-23T10:02:01Z)
- DebugBench: Evaluating Debugging Capability of Large Language Models [80.73121177868357]
DebugBench is a benchmark for evaluating the debugging capability of Large Language Models (LLMs).
It covers four major bug categories and 18 minor types in C++, Java, and Python.
We evaluate two commercial and four open-source models in a zero-shot scenario.
arXiv Detail & Related papers (2024-01-09T15:46:38Z)
- SkipAnalyzer: A Tool for Static Code Analysis with Large Language Models [12.21559364043576]
SkipAnalyzer is a large language model (LLM)-powered tool for static code analysis.
As a proof-of-concept, SkipAnalyzer is built on ChatGPT, which has exhibited outstanding performance in various software engineering tasks.
arXiv Detail & Related papers (2023-10-27T23:17:42Z)
- An Analysis of Bugs In Persistent Memory Application [0.0]
We evaluate an open-source automatic bug detection tool (i.e., AGAMOTTO) on an NVM-level hashing PM application.
Our validation tool was able to discover 65 new NVM-level hashing bugs in the PMDK library.
We also propose a Deep-Q Learning search algorithm on top of the PM-Aware search algorithm to improve the efficiency of the search strategy.
arXiv Detail & Related papers (2023-07-19T23:12:01Z)
- Automatic Static Bug Detection for Machine Learning Libraries: Are We There Yet? [14.917820383894124]
We analyze five popular and widely used static bug detectors, i.e., Flawfinder, RATS, Cppcheck, Facebook Infer, and Clang, on a curated dataset of software bugs.
Overall, our study shows that static bug detectors find only a negligible share of all bugs, accounting for 6/410 bugs (0.01%); Flawfinder and RATS are the most effective static checkers for finding software bugs in machine learning libraries.
arXiv Detail & Related papers (2023-07-09T01:38:52Z)
- DeepDebug: Fixing Python Bugs Using Stack Traces, Backtranslation, and Code Skeletons [5.564793925574796]
We present an approach to automated debugging using large, pretrained transformers.
We start by training a bug-creation model on reversed commit data for the purpose of generating synthetic bugs.
Next, we focus on 10K repositories for which we can execute tests, and create buggy versions of all functions that are covered by passing tests.
arXiv Detail & Related papers (2021-05-19T18:40:16Z)
- D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis [55.15995704119158]
We propose D2A, a differential analysis based approach to label issues reported by static analysis tools.
We use D2A to generate a large labeled dataset to train models for vulnerability identification.
arXiv Detail & Related papers (2021-02-16T07:46:53Z)
- TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task [80.38130122127882]
TACRED is one of the largest and most widely used crowdsourced datasets in Relation Extraction (RE).
In this paper, we investigate the questions: Have we reached a performance ceiling or is there still room for improvement?
We find that label errors account for 8% absolute F1 test error, and that more than 50% of the examples need to be relabeled.
arXiv Detail & Related papers (2020-04-30T15:07:37Z)