Identifying Bug Inducing Commits by Combining Fault Localisation and Code Change Histories
- URL: http://arxiv.org/abs/2502.12922v2
- Date: Wed, 19 Feb 2025 11:03:37 GMT
- Title: Identifying Bug Inducing Commits by Combining Fault Localisation and Code Change Histories
- Authors: Gabin An, Jinsu Choi, Jingun Hong, Naryeong Kim, Shin Yoo
- Abstract summary: A Bug Inducing Commit (BIC) is a code change that introduces a bug into the codebase. We propose a technique called Fonte that aims to identify the BIC, based on the core idea that a commit is more likely to be the BIC if it has more recently modified code elements that are highly suspicious of containing the bug. Fonte significantly outperforms state-of-the-art BIC identification techniques, achieving up to 45.8% higher MRR.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A Bug Inducing Commit (BIC) is a code change that introduces a bug into the codebase. Although the abnormal or unexpected behavior caused by the bug may not manifest immediately, it will eventually lead to program failures further down the line. When such a program failure is observed, identifying the relevant BIC can aid in the bug resolution process, because knowing the original intent and context behind the code change, as well as having a link to the author of that change, can facilitate bug triaging and debugging. However, existing BIC identification techniques have limitations. Bisection can be computationally expensive because it requires executing failing tests against previous versions of the codebase. Other techniques rely on the availability of specific post hoc artifacts, such as bug reports or bug fixes. In this paper, we propose a technique called Fonte that aims to identify the BIC with a core concept that a commit is more likely to be a BIC if it has more recently modified code elements that are highly suspicious of containing the bug. To realise this idea, Fonte leverages two fundamental relationships in software: the failure-to-code relationship, which can be quantified through fault localisation techniques, and the code-to-commit relationship, which can be obtained from version control systems. Our empirical evaluation using 206 real-world BICs from open-source Java projects shows that Fonte significantly outperforms state-of-the-art BIC identification techniques, achieving up to 45.8% higher MRR. We also report that the ranking scores produced by Fonte can be used to perform weighted bisection. Finally, we apply Fonte to a large-scale industry project with over 10M lines of code, and show that it can rank the actual BIC within the top five commits for 87% of the studied real batch-testing failures, and save the BIC inspection cost by 32% on average.
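The core idea described in the abstract, aggregating fault-localisation suspiciousness onto the commits that most recently modified each code element, can be sketched as follows. This is a minimal illustration based only on the abstract, not Fonte's actual algorithm: the function name, the tuple layout, and the exponential recency decay are all assumptions for exposition.

```python
# Hypothetical sketch of the failure-to-code / code-to-commit idea:
# each code element carries a fault-localisation suspiciousness score,
# and that score is credited to the commit that last modified it,
# weighted so that more recent changes count more.
from collections import defaultdict

def rank_bic_candidates(elements, decay=0.9):
    """Rank commits by aggregated, recency-weighted suspiciousness.

    `elements` is an iterable of (commit_id, commit_age, suspiciousness)
    tuples: for each code element, the commit that last modified it, how
    many commits ago that change landed, and the element's score from a
    fault-localisation technique.
    """
    scores = defaultdict(float)
    for commit_id, commit_age, susp in elements:
        # Smaller age (more recent change) means less decay is applied.
        scores[commit_id] += susp * (decay ** commit_age)
    # Highest-scoring commits are the most likely BIC candidates.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: c2 recently touched the most suspicious elements, so it
# outranks the older, less suspicious change c1.
ranking = rank_bic_candidates([
    ("c1", 5, 0.4),
    ("c2", 1, 0.9),
    ("c2", 2, 0.3),
])
```

Scores like these could also drive the weighted bisection the abstract mentions, by probing high-scoring commits first instead of always halving the range.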
Related papers
- AlgoVeri: An Aligned Benchmark for Verified Code Generation on Classical Algorithms [54.99368693313797]
Existing benchmarks test only individual languages/tools, so the performance numbers are not directly comparable. We address this gap with AlgoVeri, a benchmark that evaluates vericoding of 77 classical algorithms in Dafny, Verus, and Lean.
arXiv Detail & Related papers (2026-02-10T06:58:26Z) - Overcoming Joint Intractability with Lossless Hierarchical Speculative Decoding [58.92526489742584]
We propose a provably lossless verification method that significantly boosts the expected number of accepted tokens. We show that HSD yields consistent improvements in acceptance rates across diverse model families and benchmarks.
arXiv Detail & Related papers (2026-01-09T11:10:29Z) - Using a Sledgehammer to Crack a Nut? Revisiting Automated Compiler Fault Isolation [9.699231545580806]
This study aims to directly compare a BIC-based strategy, Basic, with representative SBFL techniques in the context of compiler fault localization. Basic identifies the most recent good release and the earliest bad release, and then employs a binary search to pinpoint the bug-inducing commit. We rigorously compare Basic against SBFL-based techniques using a benchmark consisting of 60 GCC bugs and 60 LLVM bugs.
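The binary search that Basic performs between a known-good and a known-bad revision can be sketched as below. This is a generic illustration of commit bisection, not the paper's implementation; `is_bad` stands in for a hypothetical test oracle (e.g. building the compiler at that revision and running the failing test).

```python
# Minimal sketch of bisection over an ordered commit history.
# Precondition: commits[0] is known good and commits[-1] is known bad,
# and the history flips from good to bad exactly once.
def bisect_bic(commits, is_bad):
    lo, hi = 0, len(commits) - 1  # commits[lo] good, commits[hi] bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid  # bug already present at mid
        else:
            lo = mid  # still good at mid
    return commits[hi]  # the first bad (bug-inducing) commit

# Example: ten commits, with the bug introduced at commit 6.
first_bad = bisect_bic(list(range(10)), lambda c: c >= 6)  # first_bad == 6
```

Each probe here means executing tests against a full historical version, which is exactly the cost that motivates cheaper BIC-ranking approaches such as Fonte.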
arXiv Detail & Related papers (2025-12-18T09:22:57Z) - How Do Semantically Equivalent Code Transformations Impact Membership Inference on LLMs for Code? [56.42119949944239]
We investigate whether semantically equivalent code transformation rules might be leveraged to evade MI detection. We find that model accuracy drops by only 1.5% in the worst case for each rule. Our results expose a critical loophole in license compliance enforcement for training large language models for code.
arXiv Detail & Related papers (2025-12-17T14:12:54Z) - LLMBisect: Breaking Barriers in Bug Bisection with A Comparative Analysis Pipeline [35.18683484280968]
Large Language Models (LLMs) are well-positioned to break the barriers of existing solutions: they comprehend both the textual data and the code in patches and commits. Our approach improves accuracy over the state-of-the-art solution by more than 38%.
arXiv Detail & Related papers (2025-10-30T02:47:25Z) - BugPilot: Complex Bug Generation for Efficient Learning of SWE Skills [59.003563837981886]
High-quality bugs are key to training the next generation of language-model-based software engineering (SWE) agents. We introduce a novel method for synthetic generation of difficult and diverse bugs.
arXiv Detail & Related papers (2025-10-22T17:58:56Z) - Decompiling Smart Contracts with a Large Language Model [51.49197239479266]
Despite the 78,047,845 smart contracts deployed on Etherscan (as of May 26, 2025), a mere 767,520 (about 1%) are open source. This opacity necessitates the automated semantic analysis of on-chain smart contract bytecode. We introduce a pioneering decompilation pipeline that transforms bytecode into human-readable and semantically faithful Solidity code.
arXiv Detail & Related papers (2025-06-24T13:42:59Z) - Is Compression Really Linear with Code Intelligence? [60.123628177110206]
Format Annealing is a lightweight, transparent training methodology designed to assess the intrinsic capabilities of pre-trained models equitably. Our empirical results reveal a fundamental logarithmic relationship between measured code intelligence and bits-per-character (BPC). Our work provides a more nuanced understanding of compression's role in developing code intelligence and contributes a robust evaluation framework in the code domain.
arXiv Detail & Related papers (2025-05-16T16:59:14Z) - Identifying Root Cause of bugs by Capturing Changed Code Lines with Relational Graph Neural Networks [7.676213873923721]
We propose a method called RC-Detection to detect root-cause deletion lines in changed code lines, thereby identifying the root cause of introduced bugs in bug-fixing commits. Our experiments show that, compared to the most advanced root-cause detection methods, RC-Detection improves Recall@1, Recall@2, Recall@3, and MFR by 4.107%, 5.113%, 4.289%, and 24.536%, respectively.
arXiv Detail & Related papers (2025-05-02T04:29:09Z) - Improved IR-based Bug Localization with Intelligent Relevance Feedback [2.9312156642007294]
Software bugs pose a significant challenge during development and maintenance, and practitioners spend nearly 50% of their time dealing with bugs. Many existing techniques adopt Information Retrieval (IR) to localize a reported bug using textual and semantic relevance between bug reports and source code. We present a novel technique for bug localization, BRaIn, that addresses the contextual gaps by assessing the relevance between bug reports and code.
arXiv Detail & Related papers (2025-01-17T20:29:38Z) - Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z) - Just-In-Time Software Defect Prediction via Bi-modal Change Representation Learning [5.04327119462716]
We introduce a novel bi-modal change pre-training model called BiCC-BERT.
BiCC-BERT is pre-trained on a code change corpus to learn bi-modal semantic representations.
We train JIT-BiCC using 27,391 code changes and compare its performance with 8 state-of-the-art JIT-DP approaches.
arXiv Detail & Related papers (2024-10-15T23:13:29Z) - Supporting Cross-language Cross-project Bug Localization Using Pre-trained Language Models [2.5121668584771837]
Existing techniques often struggle with generalizability and deployment due to their reliance on application-specific data.
This paper proposes a novel pre-trained language model (PLM) based technique for bug localization that transcends project and language boundaries.
arXiv Detail & Related papers (2024-07-03T01:09:36Z) - Investigating the Transferability of Code Repair for Low-Resource Programming Languages [57.62712191540067]
Large language models (LLMs) have shown remarkable performance on code generation tasks.
Recent works augment the code repair process by integrating modern techniques such as chain-of-thought reasoning or distillation.
We investigate the benefits of distilling code repair for both high and low resource languages.
arXiv Detail & Related papers (2024-06-21T05:05:39Z) - Chain of Targeted Verification Questions to Improve the Reliability of Code Generated by LLMs [10.510325069289324]
We propose a self-refinement method aimed at improving the reliability of code generated by LLMs.
Our approach is based on targeted Verification Questions (VQs) to identify potential bugs within the initial code.
Our method attempts to repair these potential bugs by re-prompting the LLM with the targeted VQs and the initial code.
arXiv Detail & Related papers (2024-05-22T19:02:50Z) - FoC: Figure out the Cryptographic Functions in Stripped Binaries with LLMs [51.898805184427545]
We propose a novel framework called FoC to Figure out the Cryptographic functions in stripped binaries.
We first build a binary large language model (FoC-BinLLM) to summarize the semantics of cryptographic functions in natural language.
We then build a binary code similarity model (FoC-Sim) upon the FoC-BinLLM to create change-sensitive representations and use it to retrieve similar implementations of unknown cryptographic functions in a database.
arXiv Detail & Related papers (2024-03-27T09:45:33Z) - Theoretically Achieving Continuous Representation of Oriented Bounding Boxes [64.15627958879053]
This paper endeavors to completely solve the issue of discontinuity in Oriented Bounding Box representation.
We propose a novel representation method called Continuous OBB (COBB) which can be readily integrated into existing detectors.
For fairness and transparency of experiments, we have developed a modularized benchmark based on the open-source deep learning framework Jittor's detection toolbox JDet for OOD evaluation.
arXiv Detail & Related papers (2024-02-29T09:27:40Z) - Knowledge-Augmented Language Model Verification [68.6099592486075]
Recent Language Models (LMs) have shown impressive capabilities in generating texts with the knowledge internalized in parameters.
We propose to verify the output and the knowledge of the knowledge-augmented LMs with a separate verifier.
Our results show that the proposed verifier effectively identifies retrieval and generation errors, allowing LMs to provide more factually correct outputs.
arXiv Detail & Related papers (2023-10-19T15:40:00Z) - Pre-training Code Representation with Semantic Flow Graph for Effective
Bug Localization [4.159296619915587]
We propose a novel directed, multiple-label code graph representation named Semantic Flow Graph (SFG).
We show that our method achieves state-of-the-art performance in bug localization.
arXiv Detail & Related papers (2023-08-24T13:25:17Z) - A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification [8.733354577147093]
This paper introduces an innovative approach that combines Large Language Models (LLMs) with Formal Verification strategies for automatic software vulnerability repair.
We present the ESBMC-AI framework as a proof of concept, leveraging the well-recognized and industry-adopted Efficient SMT-based Context-Bounded Model Checker (ESBMC) and a pre-trained transformer model.
Our results demonstrate ESBMC-AI's capability to automate the detection and repair of issues such as buffer overflow, arithmetic overflow, and pointer dereference failures with high accuracy.
arXiv Detail & Related papers (2023-05-24T05:54:10Z) - Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
arXiv Detail & Related papers (2023-02-14T18:43:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.