Explainable Software Defect Prediction from Cross Company Project
Metrics Using Machine Learning
- URL: http://arxiv.org/abs/2306.08655v1
- Date: Wed, 14 Jun 2023 17:46:08 GMT
- Title: Explainable Software Defect Prediction from Cross Company Project
Metrics Using Machine Learning
- Authors: Susmita Haldar, Luiz Fernando Capretz
- Abstract summary: This study focuses on developing defect prediction models that apply various machine learning algorithms.
One notable issue in existing defect prediction studies is the lack of transparency in the developed models.
- Score: 5.829545587965401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting the number of defects in a project is critical for project test
managers to allocate budget, resources, and schedule for testing, support, and
maintenance efforts. Software Defect Prediction models predict the number of
defects in a given project after being trained on historical defect-related
information. The majority of defect prediction studies have focused on
predicting defect-prone modules from method- and class-level static
information, whereas this study predicts defects from project-level information
based on a cross-company project dataset. This study utilizes software sizing
metrics, effort metrics, and defect density information, and focuses on
developing defect prediction models that apply various machine learning
algorithms. One notable issue in existing defect prediction studies is the lack
of transparency in the developed models. Consequently, the explainability of
the developed models has been demonstrated using the state-of-the-art post-hoc
model-agnostic method called Shapley Additive exPlanations (SHAP). Finally,
important features for predicting defects from cross-company project
information were identified.
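To make the SHAP step concrete, below is a minimal, hypothetical sketch of a post-hoc explanation for a project-level defect regressor. The feature names, the random-forest model, and the synthetic data are illustrative assumptions, not the paper's actual dataset or configuration; KernelExplainer is used here because it is SHAP's fully model-agnostic variant, matching the abstract's description.

```python
# Hypothetical sketch only: feature names, the random-forest model, and the
# synthetic data are illustrative assumptions, not the paper's setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-in for cross-company, project-level metrics.
X = pd.DataFrame({
    "functional_size": rng.uniform(50, 5000, n),   # e.g., function points
    "effort_hours": rng.uniform(100, 20000, n),
    "team_size": rng.integers(2, 30, n).astype(float),
    "defect_density": rng.uniform(0.1, 5.0, n),    # defects per size unit
})
# Synthetic target: a defect count loosely tied to size and density.
y = X["defect_density"] * X["functional_size"] / 100 + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc, model-agnostic explanation: KernelExplainer treats the model as
# a black box; a background sample keeps the computation tractable.
explainer = shap.KernelExplainer(model.predict, shap.sample(X_train, 50))
shap_values = explainer.shap_values(X_test, nsamples=100)

# Global feature importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```

For tree ensembles, shap.TreeExplainer computes the same values far more efficiently, at the cost of being model-specific rather than model-agnostic.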
Related papers
- A Probabilistic Perspective on Unlearning and Alignment for Large Language Models [48.96686419141881]
We introduce the first formal probabilistic evaluation framework for Large Language Models (LLMs).
We derive novel metrics with high-probability guarantees concerning the output distribution of a model.
Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment.
arXiv Detail & Related papers (2024-10-04T15:44:23Z) - Defect Category Prediction Based on Multi-Source Domain Adaptation [8.712655828391016]
This paper proposes a multi-source domain adaptation framework that integrates adversarial training and attention mechanisms.
Experiments on 8 real-world open-source projects show that the proposed approach achieves significant performance improvements.
arXiv Detail & Related papers (2024-05-17T03:30:31Z) - Parameter uncertainties for imperfect surrogate models in the low-noise regime [0.3069335774032178]
We analyze the generalization error of misspecified, near-deterministic surrogate models.
We show posterior distributions must cover every training point to avoid a divergent generalization error.
This is demonstrated on model problems before application to thousand-dimensional datasets in atomistic machine learning.
arXiv Detail & Related papers (2024-02-02T11:41:21Z) - Learning Sample Difficulty from Pre-trained Models for Reliable
Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - Measuring Causal Effects of Data Statistics on Language Model's
`Factual' Predictions [59.284907093349425]
Large amounts of training data are one of the major reasons for the high performance of state-of-the-art NLP models.
We provide a language for describing how training data influences predictions, through a causal framework.
Our framework bypasses the need to retrain expensive models and allows us to estimate causal effects based on observational data alone.
arXiv Detail & Related papers (2022-07-28T17:36:24Z) - Defect Prediction Using Stylistic Metrics [2.286041284499166]
This paper aims to analyze the impact of stylistic metrics on both within-project and cross-project defect prediction.
Experiments are conducted on 14 releases of 5 popular open-source projects.
arXiv Detail & Related papers (2022-06-22T10:11:05Z) - Hessian-based toolbox for reliable and interpretable machine learning in
physics [58.720142291102135]
We present a toolbox for interpretability and reliability, agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z) - Moving from Cross-Project Defect Prediction to Heterogeneous Defect
Prediction: A Partial Replication Study [0.0]
Earlier studies often used machine learning techniques to build, validate, and improve bug prediction models.
Knowledge from those models will not transfer to a target project if the source projects have not collected sufficient overlapping metrics.
We systematically evaluated Heterogeneous Defect Prediction (HDP) by replicating and validating previously reported results.
Our results shed light on the infeasibility of the HDP algorithm in many cases due to its sensitivity to parameter selection.
arXiv Detail & Related papers (2021-03-05T06:29:45Z) - Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z) - Forethought and Hindsight in Credit Assignment [62.05690959741223]
We work to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models.
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
arXiv Detail & Related papers (2020-10-26T16:00:47Z) - Software Defect Prediction Based On Deep Learning Models: Performance
Study [0.5735035463793008]
Two deep learning models, a Stacked Sparse Auto-Encoder (SSAE) and a Deep Belief Network (DBN), are deployed to classify NASA datasets.
According to the conducted experiments, accuracy is enhanced on the datasets with sufficient samples (a minimal sketch of this pipeline appears after this list).
arXiv Detail & Related papers (2020-04-02T06:02:14Z)
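As a rough illustration of the SSAE pipeline summarized in the last entry: a sparse auto-encoder is pre-trained to reconstruct module-level static metrics with an L1 sparsity penalty on its hidden code, and a linear classifier is then trained on the learned features to flag defect-prone modules. This is a hypothetical sketch, not the paper's implementation: a single hidden layer stands in for the stacked variant, and the layer sizes, sparsity weight, and synthetic data are all assumptions.

```python
# Hypothetical sketch only: a single hidden layer stands in for the stacked
# variant; layer sizes, the sparsity weight, and the data are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_features = 21                       # e.g., NASA MDP-style static code metrics
X = torch.randn(500, n_features)      # synthetic stand-in for module metrics
y = (X[:, 0] + X[:, 1] > 0).float()   # synthetic defect labels

encoder = nn.Sequential(nn.Linear(n_features, 10), nn.ReLU())
decoder = nn.Linear(10, n_features)
clf = nn.Linear(10, 1)

# Unsupervised pre-training: reconstruct the metrics while keeping the
# hidden code sparse via an L1 penalty on its activations.
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(200):
    code = encoder(X)
    loss = F.mse_loss(decoder(code), X) + 1e-3 * code.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Supervised stage: a linear classifier on the frozen learned features.
opt2 = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(200):
    logits = clf(encoder(X).detach()).squeeze(1)
    loss = F.binary_cross_entropy_with_logits(logits, y)
    opt2.zero_grad(); loss.backward(); opt2.step()

acc = ((clf(encoder(X)).squeeze(1) > 0).float() == y).float().mean()
print(f"training accuracy: {acc:.2f}")
```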
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.