Applying Machine Learning Analysis for Software Quality Test
- URL: http://arxiv.org/abs/2305.09695v1
- Date: Tue, 16 May 2023 06:10:54 GMT
- Title: Applying Machine Learning Analysis for Software Quality Test
- Authors: Al Khan, Remudin Reshid Mekuria, Ruslan Isaev
- Abstract summary: It is critical to understand what triggers maintenance and whether it can be predicted.
Numerous methods of assessing the complexity of created programs may produce useful prediction models.
In this paper, machine learning is applied to the available data to estimate cumulative software failure levels.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the biggest expenses in software development is maintenance.
Therefore, it is critical to understand what triggers maintenance and whether it
can be predicted. Numerous studies have demonstrated that specific methods of
assessing the complexity of created programs may produce useful prediction
models to ascertain the likelihood of maintenance due to software failures.
Such prediction is routinely performed prior to release, and building the
models frequently calls for specific object-oriented software measurements.
Software developers do not always have access to these measurements. In this
paper, machine learning is applied to the available data to estimate the
cumulative software failure levels. A technique to forecast a software system's
residual defectiveness using machine learning can be investigated as a solution
to the challenge of predicting residual flaws. Software metrics and defect data
were extracted from the static source code repository. Static code is used to
create software metrics, and reported bugs in the repository are used to gather
defect information. By using a correlation method, metrics that had no
connection to the defect data were removed. This makes it possible to analyze
all the data without interrupting the development process. The primary issue
with large, sophisticated software is that it is impossible to control
everything manually, and the cost of an error can be quite high. As a
consequence, developers may miss errors during testing, which raises
maintenance costs.
Finding a method to accurately forecast software defects is the overall
objective.
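
A minimal sketch of the pipeline the abstract describes: build per-module metrics from static code, drop metrics with no correlation to the defect data, then fit a predictor. The metric names, correlation threshold, and random-forest choice below are illustrative assumptions, not the paper's exact setup.

```python
# Correlation-based metric filtering followed by defect-level prediction.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def filter_metrics_by_correlation(df: pd.DataFrame, target: str,
                                  threshold: float = 0.1) -> list:
    """Keep metrics whose absolute Pearson correlation with the defect
    data exceeds the threshold (the 0.1 cutoff is an assumption)."""
    corr = df.corr(numeric_only=True)[target].drop(target)
    return corr[corr.abs() > threshold].index.tolist()

# Illustrative data: rows are modules, columns are static metrics + defects.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "loc": rng.integers(50, 5000, 200),
    "cyclomatic_complexity": rng.integers(1, 80, 200),
    "comment_ratio": rng.random(200),
})
df["defects"] = (0.002 * df["loc"] + 0.1 * df["cyclomatic_complexity"]
                 + rng.poisson(1.0, 200))

kept = filter_metrics_by_correlation(df, "defects")
X_train, X_test, y_train, y_test = train_test_split(
    df[kept], df["defects"], random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print(f"kept metrics: {kept}, R^2 = {model.score(X_test, y_test):.2f}")
```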
Related papers
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help overcome common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Size biased Multinomial Modelling of detection data in Software testing [1.7532822703595772]
We make use of the bug size, or the eventual bug size, which helps us determine software reliability more precisely; a generic formulation is sketched after this entry.
The model has been validated through simulation and subsequently applied to testing data from a critical space application.
arXiv Detail & Related papers (2024-05-24T17:57:34Z)
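
The size-biasing idea can be written generically as follows (an illustrative formulation inferred from the summary, not necessarily the paper's exact model): if bug class i has baseline detection probability p_i and size s_i, then

```latex
\tilde{p}_i = \frac{s_i \, p_i}{\sum_{j=1}^{k} s_j \, p_j},
\qquad
(n_1, \dots, n_k) \sim \mathrm{Multinomial}\big(n, (\tilde{p}_1, \dots, \tilde{p}_k)\big)
```

so larger bugs are proportionally more likely to be detected, which sharpens the resulting reliability estimate.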
- NExT: Teaching Large Language Models to Reason about Code Execution [50.93581376646064]
Large language models (LLMs) of code are typically trained on the surface textual form of programs.
We propose NExT, a method to teach LLMs to inspect the execution traces of programs and reason about their run-time behavior.
arXiv Detail & Related papers (2024-04-23T01:46:32Z)
- Demonstration of a Response Time Based Remaining Useful Life (RUL) Prediction for Software Systems [0.966840768820136]
Prognostic and Health Management (PHM) has been widely applied to hardware systems in the electronics and non-electronics domains.
This paper addresses the application of PHM concepts to software systems for fault prediction and RUL estimation; a simplified sketch follows this entry.
arXiv Detail & Related papers (2023-07-23T06:06:38Z)
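
A hedged sketch of the response-time idea under a simple assumption (linear degradation toward a fixed service-level threshold; both the threshold and the linear model are illustrative, not the paper's method):

```python
# Fit a linear trend to observed response times and extrapolate to an
# assumed failure threshold; the remaining time to crossing is the RUL.
import numpy as np

t = np.arange(50, dtype=float)          # observation times (e.g., hours)
response_ms = 100 + 2.0 * t + np.random.default_rng(1).normal(0, 5, 50)
FAILURE_THRESHOLD_MS = 400.0            # assumed service-level limit

slope, intercept = np.polyfit(t, response_ms, 1)   # degradation rate
current = slope * t[-1] + intercept
rul = (FAILURE_THRESHOLD_MS - current) / slope if slope > 0 else float("inf")
print(f"estimated RUL: {rul:.1f} time units")
```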
- Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
arXiv Detail & Related papers (2022-06-04T22:01:05Z)
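
The ranking idea, in a deliberately toy form: the paper's rankers are neural; below, a simple classifier over character n-gram features stands in, trained on programs with known pass/fail outcomes and used to rerank unexecuted candidates. All programs and labels are made up.

```python
# Illustrative stand-in: a lexical classifier instead of a neural ranker.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_programs = [                 # hypothetical past samples
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a - b",
    "def sub(a, b):\n    return a - b",
    "def sub(a, b):\n    return a + b",
]
train_passed = [1, 0, 1, 0]        # labels from earlier test executions

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
clf = LogisticRegression().fit(vec.fit_transform(train_programs), train_passed)

# Rerank new candidates by predicted pass probability, without executing them.
candidates = ["def mul(a, b):\n    return a * b",
              "def mul(a, b):\n    return a ** b"]
scores = clf.predict_proba(vec.transform(candidates))[:, 1]
ranked = [c for _, c in sorted(zip(scores, candidates), reverse=True)]
```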
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact has been an observed reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- Machine Learning Techniques for Software Quality Assurance: A Survey [5.33024001730262]
We discuss various approaches in both fault prediction and test case prioritization.
Recent studies show that deep learning algorithms for fault prediction help bridge the gap between programs' semantics and fault prediction features.
arXiv Detail & Related papers (2021-04-29T00:37:27Z)
- Uncertainty-aware Remaining Useful Life predictor [57.74855412811814]
Remaining Useful Life (RUL) estimation is the problem of inferring how long a certain industrial asset can be expected to operate.
In this work, we consider Deep Gaussian Processes (DGPs) as models whose predictions come with uncertainty estimates; a simplified, shallow-GP sketch follows this entry.
The performance of the algorithms is evaluated on the N-CMAPSS dataset from NASA for aircraft engines.
arXiv Detail & Related papers (2021-04-08T08:50:44Z)
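
The uncertainty idea in its simplest form: the paper uses Deep Gaussian Processes; the sketch below substitutes a single shallow GP (scikit-learn) to show how each RUL prediction comes with a standard deviation. The synthetic degradation data is an assumption, not N-CMAPSS.

```python
# Shallow GP regression with predictive uncertainty (a stand-in for a DGP).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, 60).reshape(-1, 1)     # synthetic sensor feature
y = 120 - X.ravel() + rng.normal(0, 4, 60)     # synthetic RUL labels

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0)).fit(X, y)
mean, std = gp.predict(np.array([[50.0]]), return_std=True)
print(f"RUL ~ {mean[0]:.1f} +/- {2 * std[0]:.1f} (approx. 95% band)")
```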
- Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, a major source of system information for troubleshooting.
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
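
A lightweight substitute for the framework's scoring step: the paper embeds log lines with pre-trained language models; here, plainly swapped in, TF-IDF features plus an Isolation Forest score each line, with the lowest score flagged as most anomalous. The log messages are invented.

```python
# Simplified log anomaly scoring: TF-IDF features + Isolation Forest
# (the paper uses pre-trained language model embeddings instead).
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

logs = [
    "INFO connection established to db-01",
    "INFO connection established to db-02",
    "INFO request served in 12ms",
    "INFO request served in 15ms",
    "ERROR kernel panic: unable to mount root fs",   # the outlier
]
X = TfidfVectorizer().fit_transform(logs)
scores = IsolationForest(random_state=0).fit(X).score_samples(X)
print(min(zip(scores, logs)))   # lowest score = most anomalous line
```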
- A Review On Software Defects Prediction Methods [0.0]
We analyze the performance of state-of-the-art machine learning algorithms for software defect classification.
We used seven datasets from the NASA PROMISE dataset repository for this research work.
arXiv Detail & Related papers (2020-10-30T16:10:23Z)
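
The kind of benchmark such a review runs can be sketched as follows; a synthetic imbalanced dataset stands in for the seven NASA PROMISE datasets (e.g., CM1, KC1, PC1), whose loading is omitted here.

```python
# Compare common classifiers on a synthetic, defect-like dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Defect datasets are typically imbalanced: few defective modules.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.85],
                           random_state=0)
for clf in [GaussianNB(), DecisionTreeClassifier(random_state=0),
            LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)]:
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{type(clf).__name__:>24}: F1 = {f1:.3f}")
```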
- Software Effort Estimation using parameter tuned Models [1.9336815376402716]
Imprecise estimation is a common cause of project failure.
The greatest pitfall for the software industry is the fast-changing nature of software development.
We need to develop useful models that accurately predict the cost of developing a software product; a tuning sketch follows this entry.
arXiv Detail & Related papers (2020-08-25T15:18:59Z)
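
A minimal sketch of what "parameter tuned" means in practice, assuming a tabular dataset of project features and person-hour effort labels; the features, model choice, and grid below are illustrative assumptions.

```python
# Grid-search hyperparameter tuning for a software effort regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 5))    # e.g., size, team, complexity features
y = 300 * X[:, 0] + 80 * X[:, 1] + rng.normal(0, 10, 200)  # person-hours

grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3],
                "learning_rate": [0.05, 0.1]},
    scoring="neg_mean_absolute_error", cv=5,
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)   # chosen config and its MAE
```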
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.