A Comprehensive Study on Automated Testing with the Software Lifecycle
- URL: http://arxiv.org/abs/2405.01608v1
- Date: Thu, 2 May 2024 06:30:37 GMT
- Title: A Comprehensive Study on Automated Testing with the Software Lifecycle
- Authors: Hussein Mohammed Ali, Mahmood Yashar Hamza, Tarik Ahmed Rashid
- Abstract summary: The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared to manual testing, and how the two approaches compare in terms of benefits and drawbacks.
The process of testing software applications is simplified, customized to certain testing situations, and can be successfully carried out by using automated testing tools.
- Score: 0.6144680854063939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The software development lifecycle depends heavily on the testing process, which is an essential part of finding issues and reviewing the quality of software. Software testing can be done in two ways: manually and automatically. With an emphasis on its primary function within the software lifecycle, the relevance of testing in general, and the advantages that come with it, this article aims to give a thorough review of automated testing and to identify time- and cost-effective methods for software testing. The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared to manual testing, and how the two approaches compare in terms of benefits and drawbacks. The process of testing software applications is simplified, tailored to specific testing situations, and can be carried out successfully by using automated testing tools.
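As a hypothetical illustration of the kind of automated testing the article reviews, the sketch below uses Python's built-in `unittest` framework; the `slugify` function is invented for the example, the point being that the same checks re-run identically on every build, with no manual effort:

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Automated Testing"), "automated-testing")

    def test_idempotent(self):
        # Applying slugify twice should give the same result as once.
        once = slugify("Software Lifecycle")
        self.assertEqual(slugify(once), once)

# Run the suite programmatically so the same checks execute on every run.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify))
```

In practice such suites are wired into a build server so they run on every change, which is where the time savings over manual testing come from.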
Related papers
- Testing Research Software: An In-Depth Survey of Practices, Methods, and Tools [3.831549883667425]
Testing research software is challenging due to the software's complexity and to the unique culture of the research software community.
This study focuses on test case design, challenges with expected outputs, use of quality metrics, execution methods, tools, and desired tool features.
arXiv Detail & Related papers (2025-01-29T16:27:13Z) - Automated Soap Opera Testing Directed by LLMs and Scenario Knowledge: Feasibility, Challenges, and Road Ahead [43.15092098658384]
Exploratory testing (ET) harnesses testers' knowledge, creativity, and experience to create varied tests that uncover unexpected bugs from the end-user's perspective.
We explore the feasibility, challenges, and road ahead of automated scenario-based ET (a.k.a. soap opera testing).
arXiv Detail & Related papers (2024-12-11T17:57:23Z) - Historical Test-time Prompt Tuning for Vision Foundation Models [99.96912440427192]
HisTPT is a Historical Test-time Prompt Tuning technique that memorizes the useful knowledge of the learnt test samples.
HisTPT achieves superior prompt tuning performance consistently while handling different visual recognition tasks.
arXiv Detail & Related papers (2024-10-27T06:03:15Z) - Which Combination of Test Metrics Can Predict Success of a Software Project? A Case Study in a Year-Long Project Course [1.553083901660282]
Testing plays an important role in securing the success of a software development project.
We investigate whether we can quantify the effects various types of testing have on functional suitability.
arXiv Detail & Related papers (2024-08-22T04:23:51Z) - Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z) - An empirical study of testing machine learning in the wild [35.13282520395855]
Machine and deep learning (ML/DL) algorithms have been increasingly adopted in many software systems.
Due to their inductive nature, ensuring the quality of these systems remains a significant challenge for the research community.
Recent research in ML/DL quality assurance has adapted concepts from traditional software testing, such as mutation testing, to improve reliability.
arXiv Detail & Related papers (2023-12-19T21:18:14Z) - Software Testing and Code Refactoring: A Survey with Practitioners [3.977213079821398]
This study aims to explore how software testing professionals deal with code refactoring, to understand the benefits and limitations of this practice in the context of software testing.
We concluded that, in the context of software testing, refactoring offers several benefits, such as supporting the maintenance of automated tests and improving the performance of the testing team.
Our study raises discussions on the importance of having testing professionals apply refactoring in the code of automated tests, allowing them to improve their coding abilities.
arXiv Detail & Related papers (2023-10-03T01:07:39Z) - Towards Automatic Generation of Amplified Regression Test Oracles [44.45138073080198]
We propose a test oracle derivation approach to amplify regression test oracles.
The approach monitors the object state during test execution and compares it to the previous version to detect any changes in relation to the SUT's intended behaviour.
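The state-comparison idea described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's implementation; `Account`, `snapshot`, and `detect_state_changes` are invented for the example:

```python
def snapshot(obj) -> dict:
    """Capture the public attribute state of an object under test."""
    return {k: v for k, v in vars(obj).items() if not k.startswith("_")}

def detect_state_changes(old: dict, new: dict) -> dict:
    """Report attributes whose values differ between the two versions."""
    return {k: (old.get(k), new.get(k))
            for k in old.keys() | new.keys()
            if old.get(k) != new.get(k)}

class Account:
    """Hypothetical system under test."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

# State recorded while running the same test against the previous version:
recorded = {"balance": 10}

acct = Account()
acct.deposit(10)
changes = detect_state_changes(recorded, snapshot(acct))
print(changes)  # → {} (object state matches the recorded oracle)
```

A non-empty diff would flag a behavioural change relative to the SUT's previous version, which the tester then classifies as an intended change or a regression.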
arXiv Detail & Related papers (2023-07-28T12:38:44Z) - TestLab: An Intelligent Automated Software Testing Framework [0.0]
TestLab is an automated software testing framework that attempts to gather a set of testing methods and automate them using Artificial Intelligence.
The first two modules aim to identify vulnerabilities from different perspectives, while the third module enhances traditional automated software testing by automatically generating test cases.
arXiv Detail & Related papers (2023-06-06T11:45:22Z) - SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been observed to be a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z) - Towards Human-Like Automated Test Generation: Perspectives from Cognition and Problem Solving [13.541347853480705]
We propose a framework based on cognitive science to identify cognitive processes of testers.
Our goal is to be able to mimic how humans create test cases and thus to design more human-like automated test generation systems.
arXiv Detail & Related papers (2021-03-08T13:43:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.