Identifying Defect-Inducing Changes in Visual Code
- URL: http://arxiv.org/abs/2309.03411v1
- Date: Thu, 7 Sep 2023 00:12:28 GMT
- Title: Identifying Defect-Inducing Changes in Visual Code
- Authors: Kalvin Eng, Abram Hindle, Alexander Senchenko
- Abstract summary: "SZZ Visual Code" (SZZ-VC) is an algorithm that detects defect-inducing changes in visual code by comparing differences of graphical elements rather than differences of lines.
We validated the algorithm for an industry-made AAA video game and 20 music visual programming defects across 12 open source projects.
- Score: 54.20154707138088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defects, or bugs, often form during software development. Identifying the
root cause of defects is essential to improve code quality, evaluate testing
methods, and support defect prediction. Examples of defect-inducing changes can
be found using the SZZ algorithm to trace the textual history of defect-fixing
changes back to the defect-inducing changes that they fix in line-based code.
The line-based approach of the SZZ method is ineffective for visual code that
represents source code graphically rather than textually. In this paper we
adapt SZZ for visual code and present the "SZZ Visual Code" (SZZ-VC) algorithm,
which detects defect-inducing changes by comparing differences of graphical
elements rather than differences of lines. We
validated the algorithm for an industry-made AAA video game and 20 music visual
programming defects across 12 open source projects. Our results show that
SZZ-VC is feasible for detecting defects in visual code for 3 different visual
programming languages.
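The SZZ family of algorithms works in two steps: diff a defect-fixing change to find what it touched, then walk the history backwards to the last change that modified each touched item (the "blame" step). The abstract's adaptation replaces line diffs with diffs of graphical elements. The following is a minimal sketch of that idea under a toy history model, where each commit snapshot maps element IDs to a content hash; the element-ID scheme, the history format, and the function names are illustrative assumptions, not the paper's implementation.

```python
def diff_elements(before, after):
    """Return IDs of graphical elements added, removed, or modified
    between two visual-code snapshots (dicts of element ID -> hash)."""
    changed = set()
    for eid in set(before) | set(after):
        if before.get(eid) != after.get(eid):
            changed.add(eid)
    return changed

def find_inducing_commits(history, fix_index):
    """SZZ-style trace at element granularity: for a defect-fixing commit,
    return the indices of earlier commits that last changed the elements
    the fix touched."""
    fixed = diff_elements(history[fix_index - 1], history[fix_index])
    inducing = set()
    for eid in fixed:
        # walk backwards to the most recent commit that changed this element
        for i in range(fix_index - 1, 0, -1):
            if history[i].get(eid) != history[i - 1].get(eid):
                inducing.add(i)
                break
        else:
            if eid in history[0]:
                inducing.add(0)  # element unchanged since the first snapshot
    return inducing

# Toy history: commit 0 creates elements A and B; commit 1 modifies B
# (the defect); commit 2 is unrelated; commit 3 fixes B.
history = [
    {"A": "h1", "B": "h2"},
    {"A": "h1", "B": "h2-bad"},
    {"A": "h1", "B": "h2-bad", "C": "h3"},
    {"A": "h1", "B": "h2-fixed", "C": "h3"},
]
print(find_inducing_commits(history, 3))  # -> {1}
```

The element-level diff is what distinguishes this sketch from classic line-based SZZ: an unrelated layout or ordering change in the serialized file would not flag an element as changed, since only the element's own content hash matters.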
Related papers
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Can OpenSource beat ChatGPT? -- A Comparative Study of Large Language Models for Text-to-Code Generation [0.24578723416255752]
We evaluate five different large language models (LLMs) concerning their capabilities for text-to-code generation.
ChatGPT handles these typical programming challenges by far the most effectively, surpassing even code-specialized models like Code Llama.
arXiv Detail & Related papers (2024-09-06T10:03:49Z)
- VDebugger: Harnessing Execution Feedback for Debugging Visual Programs [103.61860743476933]
We introduce VDebugger, a critic-refiner framework trained to localize and debug visual programs by tracking execution step by step.
VDebugger identifies and corrects program errors by leveraging detailed execution feedback, improving interpretability and accuracy.
Evaluations on six datasets demonstrate VDebugger's effectiveness, showing performance improvements of up to 3.2% in downstream task accuracy.
arXiv Detail & Related papers (2024-06-19T11:09:16Z)
- Code Revert Prediction with Graph Neural Networks: A Case Study at J.P. Morgan Chase [10.961209762486684]
Code revert prediction aims to forecast or predict the likelihood of code changes being reverted or rolled back in software development.
Previous methods for code defect detection relied on independent features but ignored relationships between code scripts.
This paper presents a systematic empirical study for code revert prediction that integrates the code import graph with code features.
arXiv Detail & Related papers (2024-03-14T15:54:29Z)
- Between Lines of Code: Unraveling the Distinct Patterns of Machine and Human Programmers [14.018844722021896]
We study the specific patterns that characterize machine- and human-authored code.
We propose DetectCodeGPT, a novel method for detecting machine-generated code.
arXiv Detail & Related papers (2024-01-12T09:15:20Z)
- Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning [90.13978453378768]
We introduce a comprehensive typology of factual errors in generated chart captions.
A large-scale human annotation effort provides insight into the error patterns and frequencies in captions crafted by various chart captioning models.
Our analysis reveals that even state-of-the-art models, including GPT-4V, frequently produce captions laced with factual inaccuracies.
arXiv Detail & Related papers (2023-12-15T19:16:21Z)
- Predicting Defective Visual Code Changes in a Multi-Language AAA Video Game Project [54.20154707138088]
We focus on constructing visual code defect prediction models that encompass visual code metrics.
We test our models using features extracted from the history of a AAA video game project.
We find that defect prediction models have better performance overall in terms of the area under the ROC curve.
arXiv Detail & Related papers (2023-09-07T00:18:43Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because parsing naturally yields graph structures for code, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.