A Comprehensive Study on Automated Testing with the Software Lifecycle
- URL: http://arxiv.org/abs/2405.01608v1
- Date: Thu, 2 May 2024 06:30:37 GMT
- Title: A Comprehensive Study on Automated Testing with the Software Lifecycle
- Authors: Hussein Mohammed Ali, Mahmood Yashar Hamza, Tarik Ahmed Rashid
- Abstract summary: The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared with manual testing, and how the two approaches differ in terms of benefits and drawbacks.
Automated testing tools simplify the process of testing software applications, allow it to be tailored to specific testing situations, and enable it to be carried out successfully.
- Score: 0.6144680854063939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The software development lifecycle depends heavily on the testing process, which is an essential part of finding issues and reviewing the quality of software. Software testing can be done in two ways: manually and automatically. With an emphasis on its primary function within the software lifecycle, the relevance of testing in general, and the advantages that come with it, this article aims to give a thorough review of automated testing and to identify time- and cost-effective methods for software testing. The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared with manual testing, and how the two approaches differ in terms of benefits and drawbacks. Automated testing tools simplify the process of testing software applications, allow it to be tailored to specific testing situations, and enable it to be carried out successfully.
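As a concrete illustration of the automation the abstract describes, here is a minimal sketch of an automated unit test written with Python's pytest framework; the `apply_discount` function and the test names are hypothetical examples, not code from the paper.

```python
# A minimal automated test: once written, it can be re-run on every
# code change at no extra manual effort, unlike a manual test pass.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(200.0, 25) == 150.0


def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Run with `pytest`, such tests execute in milliseconds and can be wired into a continuous-integration pipeline, which is where the time savings over manual testing accrue.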
Related papers
- Historical Test-time Prompt Tuning for Vision Foundation Models [99.96912440427192]
HisTPT is a Historical Test-time Prompt Tuning technique that memorizes useful knowledge from previously learnt test samples.
HisTPT consistently achieves superior prompt tuning performance across different visual recognition tasks.
arXiv Detail & Related papers (2024-10-27T06:03:15Z)
- Which Combination of Test Metrics Can Predict Success of a Software Project? A Case Study in a Year-Long Project Course [1.553083901660282]
Testing plays an important role in securing the success of a software development project.
We investigate whether we can quantify the effects various types of testing have on functional suitability.
arXiv Detail & Related papers (2024-08-22T04:23:51Z)
- Software Testing and Code Refactoring: A Survey with Practitioners [3.977213079821398]
This study aims to explore how software testing professionals deal with code refactoring, to understand the benefits and limitations of this practice in the context of software testing.
We concluded that, in the context of software testing, refactoring offers several benefits, such as supporting the maintenance of automated tests and improving the performance of the testing team.
Our study raises discussions on the importance of having testing professionals apply refactoring in the code of automated tests, allowing them to improve their coding abilities; a small before/after sketch of this practice follows this entry.
arXiv Detail & Related papers (2023-10-03T01:07:39Z)
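To make the survey's point about refactoring easing test maintenance concrete, here is a small hypothetical sketch: duplicated setup in two pytest tests is extracted into a shared fixture, so later setup changes happen in one place. The `Cart` class is an invented stand-in, not an example from the survey.

```python
import pytest


class Cart:
    """Hypothetical system under test."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


# Refactoring: the setup previously duplicated in each test now lives
# in a single fixture, which is the maintenance benefit the survey notes.
@pytest.fixture
def filled_cart():
    cart = Cart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    return cart


def test_total(filled_cart):
    assert filled_cart.total() == 12.5


def test_item_count(filled_cart):
    assert len(filled_cart.items) == 2
```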
- Using Machine Learning To Identify Software Weaknesses From Software Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested; a sketch of such a pipeline follows this entry.
arXiv Detail & Related papers (2023-08-10T13:19:10Z)
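As a rough sketch of the pipeline this entry describes, the snippet below combines TF-IDF features, latent semantic analysis (scikit-learn's TruncatedSVD), and a linear SVM to classify requirement texts into weakness categories. The tiny inline dataset and CWE labels are invented for illustration; the paper's actual PROMISE_exp data, features, and model settings may differ.

```python
# Hypothetical sketch: map requirement specifications to CWE-style
# weakness categories using LSA features and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for requirement texts labelled with weakness categories.
requirements = [
    "The system shall store user passwords in the database",
    "Input fields shall accept free-form text from users",
    "The application shall log all failed login attempts",
    "The service shall build SQL queries from user input",
]
labels = ["CWE-256", "CWE-79", "CWE-778", "CWE-89"]  # illustrative only

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=2, random_state=0),  # the LSA step
    LinearSVC(),
)
model.fit(requirements, labels)
print(model.predict(["queries are assembled from raw user input"]))
```

Swapping `LinearSVC()` for another estimator reproduces the kind of algorithm comparison the entry mentions.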
- Towards Automatic Generation of Amplified Regression Test Oracles [44.45138073080198]
We propose a test oracle derivation approach to amplify regression test oracles.
The approach monitors object state during test execution and compares it to the state recorded on the previous version to detect changes relative to the system under test's (SUT's) intended behaviour; a simplified sketch of this state comparison follows this entry.
arXiv Detail & Related papers (2023-07-28T12:38:44Z)
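The following is a simplified sketch of that state-comparison idea: capture an object's public attributes during a test run, persist them, and diff against the snapshot recorded on the previous version. The helper names and snapshot format are assumptions for illustration, not the paper's implementation.

```python
import json


def snapshot(obj) -> dict:
    """Capture an object's public attributes during test execution."""
    return {k: v for k, v in vars(obj).items() if not k.startswith("_")}


def diff_states(old: dict, new: dict) -> dict:
    """Report attributes whose values differ between two versions' runs."""
    return {
        k: (old.get(k), new.get(k))
        for k in old.keys() | new.keys()
        if old.get(k) != new.get(k)
    }


class Account:
    """Hypothetical system under test (SUT)."""

    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount  # a behaviour change here would be flagged


# Record state while the regression test exercises the SUT.
acc = Account(100)
acc.deposit(50)
current = snapshot(acc)

# Pretend this snapshot was saved when the tests ran on the prior version.
previous = json.loads('{"balance": 150}')

changes = diff_states(previous, current)
assert not changes, f"state diverged from previous version: {changes}"
```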
- TestLab: An Intelligent Automated Software Testing Framework [0.0]
TestLab is an automated software testing framework that attempts to gather a set of testing methods and automate them using Artificial Intelligence.
Of its three modules, the first two aim to identify vulnerabilities from different perspectives, while the third enhances traditional automated software testing by automatically generating test cases.
arXiv Detail & Related papers (2023-06-06T11:45:22Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
Its direct impact has been observed to be a reduction of 55% or more in testing hours for an undisclosed sports game title; a toy sketch of risk-based test selection follows this entry.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
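SUPERNOVA's internals are not detailed in this summary, so the snippet below is only a generic toy illustration of risk-based test selection: each test gets a simple risk score (recent failure rate weighted by churn in the code it covers), and the riskiest tests are chosen within a time budget. All names, numbers, and the scoring heuristic are invented.

```python
from dataclasses import dataclass


@dataclass
class TestInfo:
    name: str
    failure_rate: float  # fraction of recent runs that failed
    covered_churn: int   # recently changed lines in covered code
    minutes: float       # wall-clock cost of running the test


def risk(t: TestInfo) -> float:
    # Invented heuristic: failures touching heavily-churned code rank highest.
    return t.failure_rate * (1 + t.covered_churn)


def select(tests: list[TestInfo], budget_minutes: float) -> list[TestInfo]:
    """Greedily pick the riskiest tests that fit within the time budget."""
    chosen, used = [], 0.0
    for t in sorted(tests, key=risk, reverse=True):
        if used + t.minutes <= budget_minutes:
            chosen.append(t)
            used += t.minutes
    return chosen


suite = [
    TestInfo("physics_collisions", 0.20, 300, 30),
    TestInfo("menu_navigation", 0.01, 5, 10),
    TestInfo("online_matchmaking", 0.10, 120, 45),
]
for t in select(suite, budget_minutes=60):
    print("run:", t.name)
```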
- Towards Human-Like Automated Test Generation: Perspectives from Cognition and Problem Solving [13.541347853480705]
We propose a framework based on cognitive science to identify cognitive processes of testers.
Our goal is to be able to mimic how humans create test cases and thus to design more human-like automated test generation systems.
arXiv Detail & Related papers (2021-03-08T13:43:55Z)
- On Introducing Automatic Test Case Generation in Practice: A Success Story and Lessons Learned [7.717446055777458]
This paper reports our experience in introducing techniques for automatically generating system test suites in a medium-size company.
We describe the technical and organisational obstacles that we faced when introducing automatic test case generation.
We present ABT2.0, the test case generator that we developed.
arXiv Detail & Related papers (2021-02-28T11:31:50Z)
- Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning for transitioning modern-day software systems towards highly intelligent and self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)
- Beyond Accuracy: Behavioral Testing of NLP models with CheckList [66.42971817954806]
CheckList is a task-agnostic methodology for testing NLP models.
CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation.
In a user study, NLP practitioners with CheckList created twice as many tests and found almost three times as many bugs as users without it; a minimal invariance-test sketch in the spirit of CheckList follows this entry.
arXiv Detail & Related papers (2020-05-08T15:48:31Z)
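To ground CheckList's test types in code, here is a minimal sketch of one of them, an invariance test: a label-preserving perturbation (swapping the person's name in a template) should not change a sentiment model's prediction. The keyword-matching model is a toy stand-in, and this sketch does not use the actual CheckList library.

```python
# CheckList-style invariance test (INV): predictions should be unchanged
# under label-preserving perturbations such as swapping names.
def predict_sentiment(text: str) -> str:
    """Toy stand-in for a real NLP model."""
    return "positive" if "great" in text.lower() else "negative"


def name_swaps(template: str, names=("Alice", "Bob", "Priya")) -> list[str]:
    return [template.format(name=n) for n in names]


def test_invariance_to_names():
    cases = name_swaps("{name} thought the movie was great.")
    predictions = {predict_sentiment(c) for c in cases}
    # More than one distinct prediction signals an invariance failure.
    assert len(predictions) == 1, f"prediction varies across names: {predictions}"


test_invariance_to_names()
```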