BugsRepo: A Comprehensive Curated Dataset of Bug Reports, Comments and Contributors Information from Bugzilla
- URL: http://arxiv.org/abs/2504.18806v1
- Date: Sat, 26 Apr 2025 05:24:21 GMT
- Title: BugsRepo: A Comprehensive Curated Dataset of Bug Reports, Comments and Contributors Information from Bugzilla
- Authors: Jagrit Acharya, Gouri Ginde
- Abstract summary: BugsRepo is a multifaceted dataset derived from Mozilla projects. First, it includes a bug report metadata & comments dataset with detailed records for 119,585 fixed or closed and resolved bug reports. Second, BugsRepo features a contributor information dataset comprising 19,351 Mozilla community members. Third, it provides a subset of 10,351 well-structured bug reports.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bug reports help software development teams enhance software quality, yet their utility is often compromised by unclear or incomplete information. This issue not only hinders developers' ability to quickly understand and resolve bugs but also poses significant challenges for various software maintenance prediction systems, such as bug triaging, severity prediction, and bug report summarization. To address this issue, we introduce BugsRepo, a multifaceted dataset derived from Mozilla projects that offers three key components to support a wide range of software maintenance tasks. First, it includes a bug report metadata & comments dataset with detailed records for 119,585 fixed or closed and resolved bug reports, capturing fields like severity, creation time, status, and resolution to provide rich contextual insights. Second, BugsRepo features a contributor information dataset comprising 19,351 Mozilla community members, enriched with metadata on user roles, activity history, and contribution metrics such as the number of bugs filed, comments made, and patches reviewed, thus offering valuable information for tasks like developer recommendation. Lastly, the dataset provides a subset of 10,351 well-structured bug reports, complete with steps to reproduce, actual behavior, and expected behavior; after this initial filter, a secondary filtering layer is applied using the CTQRS scale. By integrating static metadata, contributor statistics, and detailed comment threads, BugsRepo presents a holistic view of each bug's history, supporting advancements in automated bug report analysis, which can enhance the efficiency and effectiveness of software maintenance processes.
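The "well-structured" subset is defined by the presence of steps to reproduce, actual behavior, and expected behavior. A minimal sketch of such a first-pass presence check (the field names and matching heuristic below are illustrative assumptions, not BugsRepo's actual schema or pipeline):

```python
# Hypothetical first-pass filter: keep only reports whose description
# mentions all three structural sections. Field names are illustrative.
REQUIRED_SECTIONS = ("steps to reproduce", "actual behavior", "expected behavior")

def is_structured(report):
    """Return True if the report's description mentions all required sections."""
    text = report.get("description", "").lower()
    return all(section in text for section in REQUIRED_SECTIONS)

reports = [
    {"id": 1, "description": ("Steps to reproduce: open the settings page. "
                              "Actual behavior: the app crashes. "
                              "Expected behavior: settings open normally.")},
    {"id": 2, "description": "It just broke, please fix."},
]

structured = [r["id"] for r in reports if is_structured(r)]
print(structured)  # [1]
```

A real pipeline would parse Bugzilla's structured fields rather than substring-match free text; per the abstract, BugsRepo then applies a second filtering layer based on the CTQRS scale.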
Related papers
- Automated Bug Report Prioritization in Large Open-Source Projects [3.9134031118910264]
We propose a novel approach to automated bug prioritization based on the natural-language text of bug reports. We conduct topic modeling using a variant of LDA called TopicMiner-MTM and text classification with the BERT large language model. Experimental results on an existing reference dataset containing 85,156 bug reports from the Eclipse Platform project indicate that we outperform existing approaches in terms of Accuracy, Precision, Recall, and F1-measure for bug report priority prediction.
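The evaluation metrics named above are the standard classification metrics; as a reminder of how they are computed for a single priority class (toy data, stdlib only; this is not the paper's code):

```python
# Precision, recall, and F1 for one "positive" class, from scratch.
def prf1(y_true, y_pred, positive):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy priority labels, not Eclipse data.
y_true = ["P1", "P2", "P1", "P3"]
y_pred = ["P1", "P1", "P1", "P3"]
print(prf1(y_true, y_pred, "P1"))  # (0.666..., 1.0, 0.8)
```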
arXiv Detail & Related papers (2025-04-22T13:57:48Z) - GitBugs: Bug Reports for Duplicate Detection, Retrieval Augmented Generation, Triage, and More [0.0]
We present GitBugs, a comprehensive and up-to-date dataset of over 150,000 bug reports from nine actively maintained open-source projects. GitBugs aggregates data from GitHub, Bugzilla, and Jira issue trackers, offering standardized categorical fields for classification tasks. It includes exploratory analysis notebooks and detailed project-level statistics, such as duplicate rates and resolution times.
arXiv Detail & Related papers (2025-04-13T16:55:28Z) - Tgea: An error-annotated dataset and benchmark tasks for text generation from pretrained language models [57.758735361535486]
TGEA is an error-annotated dataset for text generation from pretrained language models (PLMs). We create an error taxonomy covering 24 types of errors occurring in PLM-generated sentences. This is the first dataset with comprehensive annotations for PLM-generated texts.
arXiv Detail & Related papers (2025-03-06T09:14:02Z) - DebugBench: Evaluating Debugging Capability of Large Language Models [80.73121177868357]
DebugBench is a benchmark for evaluating the debugging capability of Large Language Models (LLMs).
It covers four major bug categories and 18 minor types in C++, Java, and Python.
We evaluate two commercial and four open-source models in a zero-shot scenario.
arXiv Detail & Related papers (2024-01-09T15:46:38Z) - Auto-labelling of Bug Report using Natural Language Processing [0.0]
Rule- and query-based solutions recommend a long list of potentially similar bug reports with no clear ranking.
In this paper, we have proposed a solution using a combination of NLP techniques.
It uses a custom data transformer, a deep neural network, and a non-generalizing machine learning method to retrieve existing identical bug reports.
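The retrieval step, finding existing reports most similar to a new one, can be sketched with a plain TF-IDF cosine-similarity baseline (stdlib only; the paper's actual method uses a custom data transformer and a deep neural network, which this does not reproduce):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors over whitespace tokens."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for tokens in tokenized for term in set(tokens))
    n = len(docs)
    return [{t: tf[t] * math.log(n / df[t]) for t in tf}
            for tf in (Counter(tokens) for tokens in tokenized)]

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "app crashes when opening settings page",   # existing report 0
    "dark mode colors look wrong",              # existing report 1
    "crash on opening the settings page",       # new report (query)
]
vecs = tfidf_vectors(corpus)
scores = [(cosine(vecs[2], v), i) for i, v in enumerate(vecs[:2])]
best = max(scores)[1]
print(best)  # 0: the crash report is the closest match
```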
arXiv Detail & Related papers (2022-12-13T02:32:42Z) - Using Developer Discussions to Guide Fixing Bugs in Software [51.00904399653609]
We propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for additional information from developers.
We demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
arXiv Detail & Related papers (2022-11-11T16:37:33Z) - Automatic Classification of Bug Reports Based on Multiple Text
Information and Reports' Intention [37.67372105858311]
This paper proposes a new automatic classification method for bug reports.
The innovation is that when categorizing bug reports, in addition to using the text information of the report, the intention of the report is also considered.
Our proposed method achieves better performance, with an F-measure ranging from 87.3% to 95.5%.
arXiv Detail & Related papers (2022-08-02T06:44:51Z) - Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors [105.12462629663757]
In this work, we aggregate factuality error annotations from nine existing datasets and stratify them according to the underlying summarization model.
We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.
arXiv Detail & Related papers (2022-05-25T15:26:48Z) - DapStep: Deep Assignee Prediction for Stack Trace Error rePresentation [61.99379022383108]
We propose new deep learning models to solve the bug triage problem.
The models are based on a bidirectional recurrent neural network with attention and on a convolutional neural network.
To improve the quality of ranking, we propose using additional information from version control system annotations.
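The idea of enriching assignee ranking with version-control annotations can be illustrated with a toy blame-based vote: developers who last touched more lines in the files appearing in a stack trace rank higher. The blame data and scoring here are invented for illustration; DapStep's actual models are neural and far more sophisticated:

```python
from collections import Counter

# Toy blame data: file -> authors of the lines it contains (invented).
blame = {
    "parser.py": ["alice", "alice", "bob"],
    "lexer.py": ["carol", "alice"],
}

def rank_assignees(trace_files):
    """Rank developers by how many blamed lines they own in the trace's files."""
    votes = Counter()
    for f in trace_files:
        votes.update(blame.get(f, []))
    return [dev for dev, _ in votes.most_common()]

print(rank_assignees(["parser.py", "lexer.py"]))  # 'alice' ranks first
```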
arXiv Detail & Related papers (2022-01-14T00:16:57Z) - Competency Problems: On Finding and Removing Artifacts in Language Data [50.09608320112584]
We argue that for complex language understanding tasks, all simple feature correlations are spurious.
We theoretically analyze the difficulty of creating data for competency problems when human bias is taken into account.
arXiv Detail & Related papers (2021-04-17T21:34:10Z) - S3M: Siamese Stack (Trace) Similarity Measure [55.58269472099399]
We present S3M -- the first approach to computing stack trace similarity based on deep learning.
It is based on a biLSTM encoder and a fully-connected classifier to compute similarity.
Our experiments demonstrate the superiority of our approach over the state-of-the-art on both open-sourced data and a private JetBrains dataset.
arXiv Detail & Related papers (2021-03-18T21:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.