Object Oriented-Based Metrics to Predict Fault Proneness in Software Design
- URL: http://arxiv.org/abs/2504.08230v1
- Date: Fri, 11 Apr 2025 03:29:15 GMT
- Title: Object Oriented-Based Metrics to Predict Fault Proneness in Software Design
- Authors: Areeb Ahmed Mir, Muhammad Raees, Afzal Ahmed
- Abstract summary: We look at the relationship between object-oriented software metrics and their implications on fault proneness. Studies indicate that object-oriented metrics are indeed a good predictor of software fault proneness.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In object-oriented software design, various metrics predict software systems' fault proneness. Fault predictions can considerably improve the quality of the development process and the software product. In this paper, we look at the relationship between object-oriented software metrics and their implications on fault proneness. Such relationships can help identify which metrics are most effective at predicting software faults. Studies indicate that object-oriented metrics are indeed a good predictor of software fault proneness; however, there are differences among existing work as to which metric is most apt for predicting software faults.
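The kind of metric-fault relationship the abstract describes can be sketched by ranking per-class metrics by their correlation with observed fault counts. The sketch below is illustrative only: the CK-style metric values (WMC, CBO, DIT) and fault counts are invented, and a real study would compute these from a measured codebase.

```python
# Illustrative sketch: ranking hypothetical object-oriented metrics by their
# Pearson correlation with fault counts. All numbers below are invented.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-class measurements: CK metrics and observed fault counts.
metrics = {
    "WMC": [5, 12, 3, 20, 8, 15],   # weighted methods per class
    "CBO": [2, 9, 1, 14, 6, 11],    # coupling between objects
    "DIT": [1, 2, 1, 3, 2, 2],      # depth of inheritance tree
}
faults = [0, 3, 0, 6, 2, 4]

# Rank metrics by strength of association with fault counts.
ranking = sorted(metrics, key=lambda m: pearson(metrics[m], faults), reverse=True)
for name in ranking:
    print(f"{name}: r = {pearson(metrics[name], faults):.2f}")
```

A real analysis would also control for class size and use rank correlations or a regression model, since fault counts are rarely linear in any single metric.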
Related papers
- A Purpose-oriented Study on Open-source Software Commits and Their Impacts on Software Quality [0.0]
We categorize commits, train prediction models to automate the classification, and investigate how commit quality is impacted by commits of different purposes.
By identifying these impacts, we will establish a new set of guidelines for committing changes that will improve the quality.
arXiv Detail & Related papers (2025-03-04T03:14:57Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Towards Understanding the Impact of Code Modifications on Software Quality Metrics [1.2277343096128712]
This study aims to assess and interpret the impact of code modifications on software quality metrics.
The underlying hypothesis posits that code modifications inducing similar changes in software quality metrics can be grouped into distinct clusters.
The results reveal distinct clusters of code modifications, each accompanied by a concise description, revealing their collective impact on software quality metrics.
arXiv Detail & Related papers (2024-04-05T08:41:18Z)
- Do Internal Software Metrics Have Relationship with Fault-proneness and Change-proneness? [1.9526430269580959]
We identified 25 internal software metrics along with the measures of change-proneness and fault-proneness within the Apache and Eclipse ecosystems.
Most of the metrics have little to no correlation with fault-proneness.
However, metrics related to inheritance, coupling, and comments showed a moderate to high correlation with change-proneness.
arXiv Detail & Related papers (2023-09-23T07:19:41Z)
- Using Machine Learning To Identify Software Weaknesses From Software Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
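The keyword-to-category classification step can be illustrated with a minimal multinomial Naive Bayes, one of the algorithms the summary lists. The vocabulary, training examples, and category labels below are invented for demonstration; real work would derive keyword features via latent semantic analysis from a corpus such as PROMISE_exp.

```python
# Minimal multinomial Naive Bayes sketch for mapping requirement-text keywords
# to weakness categories. Training data and labels are invented.
from collections import Counter, defaultdict
from math import log

def train_nb(docs):
    """docs: list of (keyword_list, label). Returns log-priors and
    Laplace-smoothed log-likelihoods per class."""
    vocab = {w for words, _ in docs for w in words}
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    for words, label in docs:
        word_counts[label].update(words)
    priors = {c: log(n / len(docs)) for c, n in label_counts.items()}
    likelihoods = {}
    for c in label_counts:
        total = sum(word_counts[c].values()) + len(vocab)  # Laplace smoothing
        likelihoods[c] = {w: log((word_counts[c][w] + 1) / total) for w in vocab}
    return priors, likelihoods

def classify(words, priors, likelihoods):
    # Out-of-vocabulary words are skipped; they contribute equally to all classes.
    scores = {
        c: priors[c] + sum(likelihoods[c].get(w, 0.0) for w in words)
        for c in priors
    }
    return max(scores, key=scores.get)

train = [
    (["password", "plaintext", "store"], "insecure-storage"),
    (["input", "sanitize", "query"], "injection"),
    (["query", "user", "input"], "injection"),
    (["credential", "plaintext", "log"], "insecure-storage"),
]
priors, likelihoods = train_nb(train)
print(classify(["plaintext", "password"], priors, likelihoods))
```

The same feature vectors could be fed to an SVM, decision tree, or CNN for the comparison the paper describes.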
arXiv Detail & Related papers (2023-08-10T13:19:10Z)
- Understanding the Challenges of Deploying Live-Traceability Solutions [45.235173351109374]
SAFA.ai is a startup focusing on fine-tuning project-specific models that deliver automated traceability in a near real-time environment.
This paper describes the challenges that characterize commercializing software traceability and highlights possible future directions.
arXiv Detail & Related papers (2023-06-19T14:34:16Z)
- Applying Machine Learning Analysis for Software Quality Test [0.0]
It is critical to comprehend what triggers maintenance and whether it can be predicted.
Numerous methods of assessing the complexity of created programs may produce useful prediction models.
In this paper, machine learning is applied to the available data to calculate cumulative software failure levels.
arXiv Detail & Related papers (2023-05-16T06:10:54Z)
- Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
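The pass@1 metric mentioned here is typically computed with the unbiased pass@k estimator from Chen et al.'s Codex evaluation: pass@k = 1 - C(n-c, k)/C(n, k), where n samples are drawn per problem and c of them are correct. The sketch below uses invented sample counts; it shows the metric, not this paper's ranker.

```python
# Unbiased pass@k estimator: probability that at least one of k randomly
# chosen samples (out of n drawn, c of them correct) passes the tests.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 3 correct. A ranker that reliably
# surfaces a correct sample lifts effective pass@1 toward pass@10.
print(round(pass_at_k(10, 3, 1), 4))   # 0.3
print(pass_at_k(10, 3, 10))            # 1.0
```

Intuitively, a perfect ranker turns pass@1 into pass@n, since it always puts a correct sample first when one exists.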
arXiv Detail & Related papers (2022-06-04T22:01:05Z)
- Injecting Planning-Awareness into Prediction and Detection Evaluation [42.228191984697006]
We take a step back and critically assess current evaluation metrics, proposing task-aware metrics as a better measure of performance in systems where they are deployed.
Experiments on an illustrative simulation as well as real-world autonomous driving data validate that our proposed task-aware metrics are able to account for outcome asymmetry and provide a better estimate of a model's closed-loop performance.
arXiv Detail & Related papers (2021-10-07T08:52:48Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
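The observation that "code naturally admits graph structures with parsing" can be made concrete with a small parser-based sketch: turning source code into labeled AST nodes plus parent-child edges, the kind of input a graph neural network consumes. This uses Python's standard `ast` module and is not the paper's actual representation, which disaggregates the graph further.

```python
# Sketch: parse Python source into (node_labels, edge_list), a simple
# graph representation a GNN could take as input.
import ast

def code_to_graph(source: str):
    tree = ast.parse(source)
    labels, edges = [], []

    def visit(node, parent_id):
        node_id = len(labels)
        labels.append(type(node).__name__)  # node feature: AST node type
        if parent_id is not None:
            edges.append((parent_id, node_id))  # structural (syntax) edge
        for child in ast.iter_child_nodes(node):
            visit(child, node_id)

    visit(tree, None)
    return labels, edges

labels, edges = code_to_graph("def f(x):\n    return x + 1\n")
print(labels[:2], len(edges))
```

A full pipeline would add semantic edges (data flow, control flow) on top of these syntactic ones before feeding the graph to the network.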
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
- Rethinking Trajectory Forecasting Evaluation [42.228191984697006]
We take a step back and critically evaluate current trajectory forecasting metrics.
We propose task-aware metrics as a better measure of performance in systems where prediction is being deployed.
arXiv Detail & Related papers (2021-07-21T18:20:03Z)
- Machine Learning Techniques for Software Quality Assurance: A Survey [5.33024001730262]
We discuss various approaches in both fault prediction and test case prioritization.
Recent studies show that deep learning algorithms for fault prediction help to bridge the gap between programs' semantics and fault prediction features.
arXiv Detail & Related papers (2021-04-29T00:37:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.