Identifying Bug Patterns in Quantum Programs
- URL: http://arxiv.org/abs/2103.09069v1
- Date: Tue, 16 Mar 2021 13:43:45 GMT
- Title: Identifying Bug Patterns in Quantum Programs
- Authors: Pengzhan Zhao, Jianjun Zhao and Lei Ma
- Abstract summary: Bug patterns are erroneous code idioms or bad coding practices that have been proved to fail time and time again.
This paper identifies and categorizes some bug patterns in the quantum programming language Qiskit.
- Score: 4.282118876884235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bug patterns are erroneous code idioms or bad coding practices that have been
proved to fail time and time again. They are usually caused by misunderstanding a
programming language's features, by the use of erroneous design patterns, or by simple
mistakes that share common behaviors. This paper identifies and categorizes several bug
patterns in the quantum programming language Qiskit and briefly discusses how to
eliminate or prevent them. We take this research as a first step toward providing an
underlying basis for debugging and testing quantum programs.
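To make the idea concrete, the following is a minimal Qiskit sketch of one plausible bug pattern of the kind the paper catalogues: a size mismatch between the quantum register being measured and the classical register that receives the results. Whether this exact pattern appears among the paper's categories is an assumption; the snippet only illustrates how such an erroneous idiom surfaces and how it is fixed.

    # Hypothetical illustration of a register-size-mismatch bug pattern in Qiskit.
    from qiskit import ClassicalRegister, QuantumCircuit, QuantumRegister

    qr = QuantumRegister(3, "q")
    cr = ClassicalRegister(2, "c")   # bug: 3 qubits will be measured into 2 classical bits
    qc = QuantumCircuit(qr, cr)
    qc.h(qr[0])
    qc.cx(qr[0], qr[1])
    qc.cx(qr[1], qr[2])
    try:
        qc.measure(qr, cr)           # Qiskit rejects the mismatched register sizes
    except Exception as exc:         # a CircuitError is raised here
        print(f"bug pattern triggered: {exc}")

    # Fix: give the classical register one bit per measured qubit.
    cr_fixed = ClassicalRegister(3, "c")
    qc_fixed = QuantumCircuit(qr, cr_fixed)
    qc_fixed.h(qr[0])
    qc_fixed.cx(qr[0], qr[1])
    qc_fixed.cx(qr[1], qr[2])
    qc_fixed.measure(qr, cr_fixed)   # succeeds

A mistake like this is attractive to catch statically because it is visible in the circuit-construction code, before any simulator or hardware run.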
Related papers
- What is a "bug"? On subjectivity, epistemic power, and implications for
software research [8.116831482130555]
"Bug" has been a colloquialism for an engineering "defect" at least since the 1870s.
Most modern software-oriented definitions speak to a disconnect between what a developer intended and what a program actually does.
"Finding bugs is easy" begins by saying "bug patterns are code that are often errors"
arXiv Detail & Related papers (2024-02-13T01:52:42Z) - Q-PAC: Automated Detection of Quantum Bug-Fix Patterns [4.00671924018776]
We present a research agenda (Q-Repair) to improve the quality of quantum software.
The ultimate goal is to utilize machine learning techniques to automatically predict fix patterns for existing quantum bugs.
In the framework, we develop seven bug-fix pattern detectors using abstract syntax trees, syntactic filters, and semantic checks.
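As a rough illustration of the AST-based idea only (this is not the Q-PAC implementation; the detector logic, the method names, and the "gate applied after measurement" pattern are all assumptions chosen for illustration), such a detector can parse Qiskit source with Python's ast module and flag suspicious call sequences:

    import ast

    GATE_METHODS = {"h", "x", "cx", "rz", "ry"}

    def calls_on(stmt, var):
        # Yield (method_name, line_number) for every method call on `var` inside stmt.
        for node in ast.walk(stmt):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and isinstance(node.func.value, ast.Name)
                    and node.func.value.id == var):
                yield node.func.attr, node.lineno

    def flag_gates_after_measure(source, circuit_var="qc"):
        # Toy detector: flag gate calls on `circuit_var` that appear after a measure() call.
        measured, findings = False, []
        for stmt in ast.parse(source).body:      # top-level statements, in source order
            for method, lineno in calls_on(stmt, circuit_var):
                if method == "measure":
                    measured = True
                elif measured and method in GATE_METHODS:
                    findings.append((lineno, method))
        return findings

    buggy = "qc.h(0)\nqc.measure(0, 0)\nqc.x(0)\n"
    print(flag_gates_after_measure(buggy))       # [(3, 'x')]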
arXiv Detail & Related papers (2023-11-29T15:09:32Z) - Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
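The summary leaves the mechanics open, so the following is only a crude stand-in, not the Premise algorithm: it ranks individual tokens by how much more often they occur in erroneous inputs than in correct ones, which conveys the flavor of token patterns that distinguish correct from erroneous predictions.

    from collections import Counter

    def distinguishing_tokens(correct_inputs, erroneous_inputs, top_k=10):
        # Relative document frequency of each token within a set of inputs.
        def doc_freq(samples):
            counts = Counter()
            for text in samples:
                counts.update(set(text.split()))
            return {tok: n / len(samples) for tok, n in counts.items()}

        freq_err = doc_freq(erroneous_inputs)
        freq_ok = doc_freq(correct_inputs)
        # Tokens whose frequency gap (erroneous minus correct) is largest.
        gap = {tok: f - freq_ok.get(tok, 0.0) for tok, f in freq_err.items()}
        return sorted(gap, key=gap.get, reverse=True)[:top_k]

    print(distinguishing_tokens(
        correct_inputs=["the cat sat", "a dog ran"],
        erroneous_inputs=["teh cat sat", "teh dog ran"]))  # 'teh' ranks first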
arXiv Detail & Related papers (2023-11-18T00:24:26Z) - Rule-Based Error Classification for Analyzing Differences in Frequent Errors [0.0]
We classify the errors in 95,631 code pairs, submitted by programmers of various skill levels on an online judge system, and identify 3.47 errors per pair on average.
The analysis shows that, for the same introductory problems, errors made by novices stem from a lack of programming knowledge.
Errors made by experts, on the other hand, stem from misunderstandings caused by careless reading of the problem statements or by the challenge of solving problems in an unusual way.
arXiv Detail & Related papers (2023-11-01T13:36:20Z) - Large Language Models of Code Fail at Completing Code with Potential Bugs [30.80172644795715]
We study the buggy-code completion problem, inspired by real-time code suggestion.
We find that the presence of potential bugs significantly degrades the generation performance of the high-performing Code-LLMs.
arXiv Detail & Related papers (2023-06-06T06:35:27Z) - Explaining Software Bugs Leveraging Code Structures in Neural Machine Translation [5.079750706023254]
Bugsplainer generates natural language explanations for software bugs by learning from a large corpus of bug-fix commits.
Our evaluation using three performance metrics shows that Bugsplainer can generate understandable and good explanations according to Google's standard.
We also conduct a developer study involving 20 participants, in which the explanations from Bugsplainer were found to be more accurate, more precise, more concise, and more useful than the baselines.
arXiv Detail & Related papers (2022-12-08T22:19:45Z) - Using Developer Discussions to Guide Fixing Bugs in Software [51.00904399653609]
We propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for additional information from developers.
We demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
arXiv Detail & Related papers (2022-11-11T16:37:33Z) - Shortcomings of Question Answering Based Factuality Frameworks for Error Localization [51.01957350348377]
We show that question answering (QA)-based factuality metrics fail to correctly identify error spans in generated summaries.
Our analysis reveals a major reason for such poor localization: questions generated by the QG module often inherit errors from non-factual summaries, and these errors are then propagated further into downstream modules.
Our experiments conclusively show that there exist fundamental issues with localization using the QA framework which cannot be fixed solely by stronger QA and QG models.
arXiv Detail & Related papers (2022-10-13T05:23:38Z) - Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
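The claim is easy to picture with a toy evaluation: sample several candidate programs per problem, let the ranker choose which single candidate to submit, and measure pass@1 on that choice. The sketch below is a generic illustration of this setup with made-up data and scores, not the paper's ranker.

    def pass_at_1(problems, pick):
        # Fraction of problems whose chosen candidate passes its tests.
        return sum(pick(cands)["passes"] for cands in problems) / len(problems)

    # Each problem has several sampled candidates with a ranker score and a test outcome.
    problems = [
        [{"score": 0.2, "passes": False}, {"score": 0.9, "passes": True}],
        [{"score": 0.7, "passes": True},  {"score": 0.1, "passes": False}],
        [{"score": 0.6, "passes": False}, {"score": 0.3, "passes": False}],
    ]

    baseline = pass_at_1(problems, pick=lambda cands: cands[0])  # submit the first sample
    reranked = pass_at_1(problems, pick=lambda cands: max(cands, key=lambda c: c["score"]))
    print(baseline, reranked)  # ~0.33 vs ~0.67: reranking lifts pass@1 on this toy data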
arXiv Detail & Related papers (2022-06-04T22:01:05Z) - Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z) - On the Robustness of Language Encoders against Grammatical Errors [66.05648604987479]
We collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.
Results confirm that the performance of all tested models is affected, but the degree of impact varies.
arXiv Detail & Related papers (2020-05-12T11:01:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.