Can LLMs Demystify Bug Reports?
- URL: http://arxiv.org/abs/2310.06310v1
- Date: Tue, 10 Oct 2023 05:07:00 GMT
- Title: Can LLMs Demystify Bug Reports?
- Authors: Laura Plein, Tegawendé F. Bissyandé
- Abstract summary: ChatGPT was able to demystify and reproduce 50% of the reported bugs.
Automatically addressing half of the reported bugs shows promising potential for applying machine learning to bug resolution, with a human in the loop only to report the bug.
- Score: 0.6650227510403052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bugs are notoriously challenging: they slow down software users and result in
time-consuming investigations for developers. These challenges are exacerbated
when bugs must be reported in natural language by users. Indeed, we lack
reliable tools to automatically address reported bugs (i.e., to enable their
analysis, reproduction, and fixing). Given the recent promise shown by LLMs
such as ChatGPT on various tasks, including in software engineering, we ask:
what if ChatGPT could understand bug reports and reproduce them? This question
is the main focus of this study. To evaluate whether ChatGPT can capture the
semantics of bug reports, we used the popular Defects4J benchmark with its bug
reports. Our study shows that ChatGPT was able to demystify and reproduce 50%
of the reported bugs. Automatically addressing half of the reported bugs shows
promising potential for applying machine learning to bug resolution, with a
human in the loop only to report the bug.
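The pipeline the abstract describes can be pictured with a minimal sketch: feed a natural language bug report to a chat model and ask for a test that reproduces the failure. This is not the authors' implementation; `ask_llm` is a placeholder for any chat-completion client.

```python
# Minimal sketch of the idea, not the paper's code. `ask_llm` is a
# placeholder for whatever chat-completion client is used.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a chat model and return its reply."""
    raise NotImplementedError("wire up an LLM client here")

def reproduction_prompt(bug_report: str) -> str:
    """Ask the model to first understand, then reproduce, the reported bug."""
    return (
        "You are given a bug report for a Java project.\n"
        "1. Summarize the bug in one sentence.\n"
        "2. Write a JUnit test that reproduces the reported failure.\n\n"
        f"Bug report:\n{bug_report}"
    )

def try_to_reproduce(bug_report: str) -> str:
    """Return the model's candidate reproducing test for one bug report."""
    return ask_llm(reproduction_prompt(bug_report))
```

A natural success criterion, and presumably the one behind the 50% figure, is that the generated test actually fails on the buggy version of the program.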
Related papers
- DebugBench: Evaluating Debugging Capability of Large Language Models [80.73121177868357]
DebugBench is a benchmark for evaluating the debugging capability of Large Language Models (LLMs).
It covers four major bug categories and 18 minor types in C++, Java, and Python.
We evaluate two commercial and four open-source models in a zero-shot scenario.
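The zero-shot scenario means the model receives only the buggy program and an instruction, with no worked examples. A hypothetical sketch of such a query:

```python
# Hypothetical zero-shot query in the spirit of DebugBench: no exemplars,
# just the buggy code and a repair instruction.
def zero_shot_debug_prompt(buggy_code: str, language: str) -> str:
    """Build a single-turn repair prompt; zero-shot means no worked examples."""
    return (
        f"The following {language} code contains a bug. "
        "Return a corrected version of the code and nothing else.\n\n"
        f"{buggy_code}"
    )

# Example: zero_shot_debug_prompt("int add(int a, int b) { return a - b; }", "C++")
```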
arXiv Detail & Related papers (2024-01-09T15:46:38Z)
- Exploring ChatGPT's Capabilities on Vulnerability Management [56.4403395100589]
We explore ChatGPT's capabilities on 6 tasks involving the complete vulnerability management process with a large-scale dataset containing 70,346 samples.
One notable example is ChatGPT's proficiency in tasks like generating titles for software bug reports.
Our findings reveal the difficulties encountered by ChatGPT and shed light on promising future directions.
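The title-generation task mentioned above has a natural evaluation shape: compare the model's title against the developer-written one. The token-overlap F1 below is a crude stand-in for the paper's actual metrics, included only to make the task concrete.

```python
def title_f1(generated: str, reference: str) -> float:
    """Token-overlap F1 between a generated and a reference bug report title.
    A crude stand-in for proper metrics such as ROUGE; illustration only."""
    gen, ref = generated.lower().split(), reference.lower().split()
    overlap = len(set(gen) & set(ref))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(gen), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)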
arXiv Detail & Related papers (2023-11-11T11:01:13Z)
- Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency to select labels in earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
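A primacy effect can be probed by permuting the order of candidate labels and checking whether the model's picks track position rather than content. A minimal sketch of such a probe, assuming a placeholder `ask_llm` client:

```python
import itertools
from collections import Counter

def primacy_probe(question: str, labels: list[str], ask_llm) -> Counter:
    """Ask the same question under every ordering of the candidate labels and
    count which POSITION the chosen label occupied. Mass piling up on
    position 0 indicates a primacy effect. `ask_llm` is a placeholder client
    that returns the model's answer text."""
    position_counts = Counter()
    for ordering in itertools.permutations(labels):
        prompt = question + "\nOptions:\n" + "\n".join(
            f"{i + 1}. {label}" for i, label in enumerate(ordering)
        )
        answer = ask_llm(prompt)
        for i, label in enumerate(ordering):
            if label in answer:
                position_counts[i] += 1
                break
    return position_counts
```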
arXiv Detail & Related papers (2023-10-20T00:37:28Z)
- Bugsplainer: Leveraging Code Structures to Explain Software Bugs with Neural Machine Translation [4.519754139322585]
Bugsplainer generates natural language explanations for software bugs by learning from a large corpus of bug-fix commits.
Bugsplainer leverages code structures to reason about a bug and employs a fine-tuned version of the text generation model CodeT5.
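Generating an explanation with a fine-tuned CodeT5 would look roughly like the sketch below, using the Hugging Face transformers library. "Salesforce/codet5-base" is the real base checkpoint; the fine-tuned path is hypothetical, as the paper's released weights are not referenced here.

```python
# Rough shape of explanation generation with a fine-tuned CodeT5.
# "path/to/bugsplainer-codet5" is a hypothetical local checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("path/to/bugsplainer-codet5")

def explain(buggy_snippet: str) -> str:
    """Generate a natural language explanation for a buggy code snippet."""
    inputs = tokenizer(buggy_snippet, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```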
arXiv Detail & Related papers (2023-08-23T17:35:16Z)
- Prompting Is All You Need: Automated Android Bug Replay with Large Language Models [28.69675481931385]
We propose AdbGPT, a new lightweight approach that automatically reproduces bugs from bug reports through prompt engineering.
AdbGPT leverages few-shot learning and chain-of-thought reasoning to elicit human knowledge and logical reasoning from LLMs.
Our evaluations demonstrate the effectiveness and efficiency of AdbGPT, which reproduces 81.3% of bug reports in 253.6 seconds.
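Few-shot chain-of-thought prompting means the prompt carries worked examples showing how a report decomposes, step by step, into reproduction actions before the new report is presented. A hypothetical sketch (the exemplar below is invented for illustration):

```python
# Hypothetical few-shot, chain-of-thought prompt in the spirit of AdbGPT.
# The worked example is invented; it is not from the paper.
FEW_SHOT_EXAMPLE = """\
Bug report: "App crashes when I tap Save on an empty note."
Reasoning: The crash needs an empty note and a Save tap, so open the editor,
leave the body empty, then tap Save.
Steps: 1. Open the note editor  2. Leave the body empty  3. Tap "Save"
"""

def adbgpt_style_prompt(report: str) -> str:
    """Prepend the worked example, then ask the model to reason the same way."""
    return FEW_SHOT_EXAMPLE + f'\nBug report: "{report}"\nReasoning:'
```

The extracted steps would then be replayed on a device (the tool's name suggests Android Debug Bridge commands), but that half of the pipeline is not sketched here.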
arXiv Detail & Related papers (2023-06-03T03:03:52Z)
- ChatLog: Carefully Evaluating the Evolution of ChatGPT Across Time [54.18651663847874]
ChatGPT has achieved great success and can be considered to have acquired infrastructural status.
Existing benchmarks face two challenges: (1) disregard for periodic evaluation and (2) lack of fine-grained features.
We construct ChatLog, an ever-updating dataset with large-scale records of diverse long-form ChatGPT responses for 21 NLP benchmarks from March 2023 onward.
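An ever-updating record of responses amounts to a timestamped append-only log, so the same prompts can be re-run periodically and compared across dates. A minimal sketch of one record writer (the JSON Lines schema is an assumption, not ChatLog's actual format):

```python
import json
import time

def log_response(path: str, benchmark: str, prompt: str, response: str) -> None:
    """Append one timestamped response record as a JSON line.
    The field names here are illustrative, not ChatLog's schema."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "benchmark": benchmark,
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```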
arXiv Detail & Related papers (2023-04-27T11:33:48Z)
- Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine [97.8609714773255]
We evaluate ChatGPT for machine translation, including translation prompts, multilingual translation, and translation robustness.
ChatGPT performs competitively with commercial translation products but lags behind significantly on low-resource or distant languages.
With the launch of the GPT-4 engine, the translation performance of ChatGPT is significantly boosted.
arXiv Detail & Related papers (2023-01-20T08:51:36Z)
- App Review Driven Collaborative Bug Finding [5.024930832959602]
We build on the hypothesis that mobile apps from the same category may be affected by similar bugs in their evolution process.
It is thus possible to transfer the experience of one historical app to quickly find bugs in its new counterparts.
We design the BugRMSys approach to recommend bug reports for a target app by matching historical bug reports from apps in the same category with user app reviews of the target app.
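At heart, this matching is a text-similarity ranking between historical bug reports and target-app reviews. The TF-IDF cosine sketch below is a simplified stand-in for illustration; the paper's actual matching model may differ.

```python
# Simplified stand-in for BugRMSys-style matching: rank historical bug
# reports from same-category apps by textual similarity to a target app's
# user reviews. TF-IDF cosine is used purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_reports(historical_reports: list[str],
                  target_reviews: list[str], top_k: int = 5):
    """Return the top-k (report index, score) pairs, where a report's score
    is its best cosine similarity to any review of the target app."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(historical_reports + target_reviews)
    reports_vec = matrix[: len(historical_reports)]
    reviews_vec = matrix[len(historical_reports):]
    scores = cosine_similarity(reviews_vec, reports_vec).max(axis=0)
    ranked = sorted(enumerate(scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]
```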
arXiv Detail & Related papers (2023-01-07T09:38:06Z)
- Explaining Software Bugs Leveraging Code Structures in Neural Machine Translation [5.079750706023254]
Bugsplainer generates natural language explanations for software bugs by learning from a large corpus of bug-fix commits.
Our evaluation using three performance metrics shows that Bugsplainer can generate understandable and good explanations according to Google's standard.
We also conduct a developer study involving 20 participants where the explanations from Bugsplainer were found to be more accurate, more precise, more concise and more useful than the baselines.
arXiv Detail & Related papers (2022-12-08T22:19:45Z)
- Using Developer Discussions to Guide Fixing Bugs in Software [51.00904399653609]
We propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for additional information from developers.
We demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
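Concretely, this amounts to prepending discussion-derived natural language context to the fixing model's input. A minimal sketch of assembling such an input (the delimiters are invented for illustration, not the paper's format):

```python
def build_bugfix_input(buggy_code: str, discussion_comments: list[str],
                       max_context_chars: int = 2000) -> str:
    """Prepend bug-report discussion text (available before any fix exists)
    to the buggy code as context for a bug-fixing model.
    The <context>/<code> delimiters are invented for illustration."""
    context = " ".join(discussion_comments)[:max_context_chars]
    return f"<context> {context} </context>\n<code>\n{buggy_code}\n</code>"
```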
arXiv Detail & Related papers (2022-11-11T16:37:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.