Deep Learning Based Concurrency Bug Detection and Localization
- URL: http://arxiv.org/abs/2508.20911v1
- Date: Thu, 28 Aug 2025 15:40:20 GMT
- Title: Deep Learning Based Concurrency Bug Detection and Localization
- Authors: Zuocheng Feng, Kaiwen Zhang, Miaomiao Wang, Yiming Cheng, Yuandao Cai, Xiaofeng Li, Guanjun Liu
- Abstract summary: Concurrency bugs are notoriously hard to detect and compromise software reliability and security. Existing deep learning methods face three main limitations. We propose a novel method for effective concurrency bug detection as well as localization.
- Score: 9.2389985253336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concurrency bugs, caused by improper synchronization of shared resources in multi-threaded or distributed systems, are notoriously hard to detect and thus compromise software reliability and security. The existing deep learning methods face three main limitations. First, there is an absence of large and dedicated datasets of diverse concurrency bugs for them. Second, they lack sufficient representation of concurrency semantics. Third, binary classification results fail to provide finer-grained debug information such as precise bug lines. To address these problems, we propose a novel method for effective concurrency bug detection as well as localization. We construct a dedicated concurrency bug dataset to facilitate model training and evaluation. We then integrate a pre-trained model with a heterogeneous graph neural network (GNN), by incorporating a new Concurrency-Aware Code Property Graph (CCPG) that concisely and effectively characterizes concurrency semantics. To further facilitate debugging, we employ SubgraphX, a GNN-based interpretability method, which explores the graphs to precisely localize concurrency bugs, mapping them to specific lines of source code. On average, our method demonstrates an improvement of 10% in accuracy and precision and 26% in recall compared to state-of-the-art methods across diverse evaluation settings.
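The pipeline the abstract describes (encode a program as a Concurrency-Aware Code Property Graph, classify it with a GNN, then explain the prediction to recover buggy lines) can be sketched with a toy graph structure. The edge kinds, node layout, and the lock-based flagging rule below are illustrative assumptions for the sketch, not the paper's actual CCPG definition or the SubgraphX procedure; a real pipeline would score subgraphs with a GNN explainer rather than a syntactic rule.

```python
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

# Hypothetical edge kinds: the classic code-property-graph relations
# (AST, control flow, data flow) plus explicit concurrency relations.
EDGE_KINDS = {"AST", "CFG", "DFG", "THREAD_SPAWN", "LOCK_REGION", "SHARED_ACCESS"}

@dataclass
class CCPG:
    # node id -> (source line number, code snippet)
    nodes: Dict[int, Tuple[int, str]]
    # typed edges as (src node, dst node, kind)
    edges: List[Tuple[int, int, str]]

    def neighbors(self, nid: int, kind: str) -> Set[int]:
        """All nodes reachable from nid along edges of the given kind."""
        return {d for s, d, k in self.edges if s == nid and k == kind}

def unprotected_shared_accesses(g: CCPG) -> List[int]:
    """Flag source lines whose shared-variable access lies outside every
    lock-protected region, a crude stand-in for learned localization."""
    locked = {d for _, d, k in g.edges if k == "LOCK_REGION"}
    flagged = {g.nodes[d][0] for _, d, k in g.edges
               if k == "SHARED_ACCESS" and d not in locked}
    return sorted(flagged)

# Toy example: two increments of a shared counter, only one under a lock.
g = CCPG(
    nodes={1: (3, "lock(m)"), 2: (4, "counter += 1"),
           3: (5, "unlock(m)"), 4: (7, "counter += 1"),
           5: (1, "int counter")},
    edges=[(1, 2, "LOCK_REGION"),
           (5, 2, "SHARED_ACCESS"), (5, 4, "SHARED_ACCESS")],
)
print(unprotected_shared_accesses(g))  # → [7]
```

The typed-edge representation is what makes the graph "heterogeneous": a GNN over it can learn different message-passing weights per relation, so concurrency edges contribute differently from plain data-flow edges.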
Related papers
- Identifying Concurrency Bug Reports via Linguistic Patterns [5.794959117360381]
This paper presents a linguistic-pattern-based framework for automatically identifying concurrency bug reports. We derive 58 distinct linguistic patterns from 730 manually labeled bug reports. We evaluate four complementary approaches (matching, learning, prompt-based, and fine-tuning) spanning traditional machine learning, large language models, and pre-trained language models.
arXiv Detail & Related papers (2026-01-22T21:54:14Z) - Adapting Language Balance in Code-Switching Speech [60.296574524609575]
Large foundational models still struggle against code-switching test cases. We use differentiable surrogates to mitigate context bias during generation. Experiments with Arabic and Chinese-English show that the models predict switching points more accurately.
arXiv Detail & Related papers (2025-10-21T15:23:55Z) - Supporting Cross-language Cross-project Bug Localization Using Pre-trained Language Models [2.5121668584771837]
Existing techniques often struggle with generalizability and deployment due to their reliance on application-specific data.
This paper proposes a novel pre-trained language model (PLM) based technique for bug localization that transcends project and language boundaries.
arXiv Detail & Related papers (2024-07-03T01:09:36Z) - Provably Secure Disambiguating Neural Linguistic Steganography [66.30965740387047]
The segmentation ambiguity problem, which arises when using language models based on subwords, leads to occasional decoding failures. We propose a novel secure disambiguation method named SyncPool, which effectively addresses the segmentation ambiguity problem. SyncPool does not change the size of the candidate pool or the distribution of tokens and thus is applicable to provably secure language steganography methods.
arXiv Detail & Related papers (2024-03-26T09:25:57Z) - Pre-training Code Representation with Semantic Flow Graph for Effective Bug Localization [4.159296619915587]
We propose a novel directed, multiple-label code graph representation named Semantic Flow Graph (SFG).
We show that our method achieves state-of-the-art performance in bug localization.
arXiv Detail & Related papers (2023-08-24T13:25:17Z) - Supervised Auto-Encoding Twin-Bottleneck Hashing [5.653113092257149]
Auto-encoding Twin-bottleneck Hashing is one such method that dynamically builds the graph.
In this work, we generalize the original model into a supervised deep hashing network by incorporating the label information.
arXiv Detail & Related papers (2023-06-19T18:50:02Z) - ConNER: Consistency Training for Cross-lingual Named Entity Recognition [96.84391089120847]
Cross-lingual named entity recognition suffers from data scarcity in the target languages.
We propose ConNER as a novel consistency training framework for cross-lingual NER.
arXiv Detail & Related papers (2022-11-17T07:57:54Z) - GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors.
arXiv Detail & Related papers (2022-07-06T06:26:17Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z) - Generating Bug-Fixes Using Pretrained Transformers [11.012132897417592]
We introduce a data-driven program repair approach which learns to detect and fix bugs in Java methods mined from real-world GitHub repositories.
We show that pretraining on source code programs improves the number of patches found by 33% as compared to supervised training from scratch.
We refine the standard accuracy evaluation metric into non-deletion and deletion-only fixes, and show that our best model generates 75% more non-deletion fixes than the previous state of the art.
arXiv Detail & Related papers (2021-04-16T05:27:04Z) - PushNet: Efficient and Adaptive Neural Message Passing [1.9121961872220468]
Message passing neural networks have recently evolved into a state-of-the-art approach to representation learning on graphs.
Existing methods perform synchronous message passing along all edges in multiple subsequent rounds.
We consider a novel asynchronous message passing approach where information is pushed only along the most relevant edges until convergence.
arXiv Detail & Related papers (2020-03-04T18:15:30Z) - Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.