BAFLineDP: Code Bilinear Attention Fusion Framework for Line-Level
Defect Prediction
- URL: http://arxiv.org/abs/2402.07132v1
- Date: Sun, 11 Feb 2024 09:01:42 GMT
- Title: BAFLineDP: Code Bilinear Attention Fusion Framework for Line-Level
Defect Prediction
- Authors: Shaojian Qiu, Huihao Huang, Jianxiang Luo, Yingjie Kuang, Haoyu Luo
- Abstract summary: This paper presents a line-level defect prediction method grounded in a code bilinear attention fusion framework (BAFLineDP).
Our results demonstrate that BAFLineDP outperforms current advanced file-level and line-level defect prediction approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software defect prediction aims to identify defect-prone code, aiding
developers in optimizing testing resource allocation. Most defect prediction
approaches primarily focus on coarse-grained, file-level defect prediction,
which fails to provide developers with the precision required to locate
defective code. Recently, some researchers have proposed fine-grained,
line-level defect prediction methods. However, most of these approaches lack an
in-depth consideration of the contextual semantics of code lines and neglect
the local interaction information among code lines. To address the above
issues, this paper presents a line-level defect prediction method grounded in a
code bilinear attention fusion framework (BAFLineDP). This method discerns
defective code files and lines by integrating source code line semantics,
line-level context, and local interaction information between code lines and
line-level context. Through an extensive analysis involving within- and
cross-project defect prediction across 9 distinct projects encompassing 32
releases, our results demonstrate that BAFLineDP outperforms current advanced
file-level and line-level defect prediction approaches.
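The bilinear attention fusion described in the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the paper's architecture: the embedding shapes, the softmax direction, and the residual fusion step are all illustrative choices, and the bilinear interaction matrix `W` would be learned in practice.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilinear_attention_fusion(lines, context, W):
    """Fuse per-line embeddings with line-level context embeddings via a
    bilinear attention map A[i, j] = lines[i] @ W @ context[j].

    lines:   (n, d) semantic embeddings of the n code lines
    context: (m, d) line-level context embeddings
    W:       (d, d) bilinear interaction matrix (learned in a real model)

    Returns the fused (n, d) representation and the (n, m) attention map.
    """
    scores = lines @ W @ context.T      # (n, m) local interaction scores
    attn = softmax(scores, axis=1)      # each line attends over the context
    fused = lines + attn @ context      # residual fusion of attended context
    return fused, attn

# toy usage with random embeddings
rng = np.random.default_rng(0)
lines = rng.normal(size=(4, 8))
context = rng.normal(size=(6, 8))
W = rng.normal(size=(8, 8)) * 0.1
fused, attn = bilinear_attention_fusion(lines, context, W)
```

The attention map captures the "local interaction information between code lines and line-level context" the abstract refers to: row i shows how strongly line i interacts with each context position.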
Related papers
- Defect Prediction with Content-based Features [3.765563438775143]
Traditional defect prediction approaches often use metrics that measure the complexity of the design or implementing code of a software system.
In this paper, we explore a different approach based on content of source code.
arXiv Detail & Related papers (2024-09-27T00:49:27Z)
- Understanding Defects in Generated Codes by Language Models [0.669087470775851]
This study categorizes and analyzes 367 identified defects from code snippets generated by Large Language Models.
Error categories indicate key areas where LLMs frequently fail, underscoring the need for targeted improvements.
This paper applied five prompt engineering techniques, including Scratchpad Prompting, Program of Thoughts Prompting, Chain-of-Thought Prompting, and Structured Chain-of-Thought Prompting.
arXiv Detail & Related papers (2024-08-23T21:10:09Z)
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to improve decoding performance over existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Defect Category Prediction Based on Multi-Source Domain Adaptation [8.712655828391016]
This paper proposes a multi-source domain adaptation framework that integrates adversarial training and attention mechanisms.
Experiments on 8 real-world open-source projects show that the proposed approach achieves significant performance improvements.
arXiv Detail & Related papers (2024-05-17T03:30:31Z)
- Code Revert Prediction with Graph Neural Networks: A Case Study at J.P. Morgan Chase [10.961209762486684]
Code revert prediction aims to predict the likelihood of code changes being reverted or rolled back in software development.
Previous methods for code defect detection relied on independent features but ignored relationships between code scripts.
This paper presents a systematic empirical study for code revert prediction that integrates the code import graph with code features.
arXiv Detail & Related papers (2024-03-14T15:54:29Z)
- Predicting Line-Level Defects by Capturing Code Contexts with Hierarchical Transformers [4.73194777046253]
Bugsplorer is a novel deep-learning technique for line-level defect prediction.
It can rank the first 20% defective lines within the top 1-3% suspicious lines.
It has the potential to significantly reduce SQA costs by ranking defective lines higher.
arXiv Detail & Related papers (2023-12-19T06:25:04Z)
- Continual learning for surface defect segmentation by subnetwork creation and selection [55.2480439325792]
We introduce a new continual (or lifelong) learning algorithm that performs segmentation tasks without undergoing catastrophic forgetting.
The method is applied to two different surface defect segmentation problems that are learned incrementally.
Our approach shows comparable results with joint training when all the training data (all defects) are seen simultaneously.
arXiv Detail & Related papers (2023-12-08T15:28:50Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- A Universal Error Measure for Input Predictions Applied to Online Graph Problems [57.58926849872494]
We introduce a novel measure for quantifying the error in input predictions.
The measure captures errors due to absent predicted requests as well as unpredicted actual requests.
arXiv Detail & Related papers (2022-05-25T15:24:03Z)
- Autoregressive Belief Propagation for Decoding Block Codes [113.38181979662288]
We revisit recent methods that employ graph neural networks for decoding error correcting codes.
Our method violates the symmetry conditions that enable the other methods to train exclusively with the zero-word.
Despite not having the luxury of training on a single word, and the inability to train on more than a small fraction of the relevant sample space, we demonstrate effective training.
arXiv Detail & Related papers (2021-01-23T17:14:55Z)
- Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
arXiv Detail & Related papers (2021-01-07T18:59:33Z)
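The holdout-calibration idea in the last entry can be sketched with a simplified split-conformal-style procedure. This is an assumption: the paper's RCPS method uses concentration inequalities for a stronger risk guarantee, whereas the toy below only targets approximate marginal coverage, and the 3-class black-box scores are synthetic.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Pick a score threshold tau on a holdout set so the true class
    clears tau for roughly a 1 - alpha fraction of points
    (split-conformal style, not the paper's RCPS bound)."""
    true_scores = cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(true_scores)
    k = int(np.floor(alpha * (n + 1))) - 1  # zero-indexed order statistic
    if k < 0:
        return -np.inf  # too little calibration data: include everything
    return np.sort(true_scores)[k]

def prediction_set(probs, tau):
    """All classes whose score clears the calibrated threshold."""
    return np.nonzero(probs >= tau)[0]

# toy usage: scores from a hypothetical 3-class black-box predictor
rng = np.random.default_rng(1)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = rng.integers(0, 3, size=200)
tau = calibrate_threshold(cal_probs, cal_labels, alpha=0.1)
sets = [prediction_set(p, tau) for p in cal_probs]
```

Lowering `alpha` lowers `tau` and produces larger, more cautious prediction sets; the holdout set is what makes the guarantee hold for any black-box predictor.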
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all generated summaries) and is not responsible for any consequences of its use.