On Introducing Automatic Test Case Generation in Practice: A Success
Story and Lessons Learned
- URL: http://arxiv.org/abs/2103.00465v1
- Date: Sun, 28 Feb 2021 11:31:50 GMT
- Title: On Introducing Automatic Test Case Generation in Practice: A Success
Story and Lessons Learned
- Authors: Matteo Brunetto, Giovanni Denaro, Leonardo Mariani, Mauro Pezzè
- Abstract summary: This paper reports our experience in introducing techniques for automatically generating system test suites in a medium-size company.
We describe the technical and organisational obstacles that we faced when introducing automatic test case generation.
We present ABT2.0, the test case generator that we developed.
- Score: 7.717446055777458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The level and quality of automation dramatically affect software testing
activities, determine the costs and effectiveness of the testing process, and
largely impact the quality of the final product. While the costs and benefits
of automating many testing activities in industrial practice (including
managing the quality process, executing large test suites, and managing
regression test suites) are well understood and documented, the benefits and
obstacles of automatically generating system test suites in industrial practice
are not yet well documented, despite the recent progress of automated test case
generation tools. Proprietary tools for automatically generating test cases are
becoming common practice in large software organisations, and commercial tools
are becoming available for some application domains and testing levels.
However, generating system test cases in small and medium-size software
companies is still largely a manual, inefficient and ad-hoc activity. This
paper reports our experience in introducing techniques for automatically
generating system test suites in a medium-size company. We describe the
technical and organisational obstacles that we faced when introducing automatic
test case generation in the development process of the company, and present the
solutions that we successfully applied in that context. In particular, the
paper discusses the problems of automating the generation of test cases by
referring to a customised ERP application that the medium-size company
developed for a third party multinational company, and presents ABT2.0, the
test case generator that we developed by tailoring ABT, a state-of-the-art
research GUI test generator, to the company's industrial environment. This
paper presents the new features of ABT2.0, and discusses how these new features
address the issues that we faced.
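
The abstract describes ABT2.0 only at a high level. As background, ABT-style tools explore an application's GUI with reinforcement learning, so that each generated test case is a sequence of GUI actions. The sketch below is a minimal illustration of that idea, not ABT2.0's implementation: the `GuiDriver` adapter and the reward signal based on how much the GUI changed are assumptions made for the example.

```python
import random
from collections import defaultdict

class GuiDriver:
    """Hypothetical adapter around a real GUI automation library."""
    def state(self) -> str: ...            # e.g. a hash of the visible widgets
    def actions(self) -> list[str]: ...    # executable GUI actions in this state
    def execute(self, action: str) -> float: ...  # reward, e.g. amount of GUI change

def generate_episode(driver, q, steps=50, epsilon=0.2, alpha=0.9, gamma=0.5):
    """One generated test case = one episode of epsilon-greedy exploration."""
    episode = []
    for _ in range(steps):
        s, acts = driver.state(), driver.actions()
        if not acts:
            break
        if random.random() < epsilon:
            a = random.choice(acts)                      # explore a random action
        else:
            a = max(acts, key=lambda act: q[(s, act)])   # exploit best known action
        reward = driver.execute(a)
        s2 = driver.state()
        best_next = max((q[(s2, a2)] for a2 in driver.actions()), default=0.0)
        # Standard Q-learning update.
        q[(s, a)] = (1 - alpha) * q[(s, a)] + alpha * (reward + gamma * best_next)
        episode.append((s, a))
    return episode

q_table = defaultdict(float)
# suite = [generate_episode(MyErpDriver(), q_table) for _ in range(100)]
```

Each episode doubles as a replayable system test: the recorded (state, action) pairs can be re-executed against later versions of the application.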
Related papers
- AutoPT: How Far Are We from the End2End Automated Web Penetration Testing? [54.65079443902714]
We introduce AutoPT, an automated penetration testing agent based on the principle of PSM driven by LLMs.
Our results show that AutoPT outperforms the baseline framework ReAct on the GPT-4o mini model.
arXiv Detail & Related papers (2024-11-02T13:24:30Z)
- The Future of Software Testing: AI-Powered Test Case Generation and Validation [0.0]
This paper explores the transformative potential of AI in improving test case generation and validation.
It focuses on AI's ability to enhance efficiency, accuracy, and scalability in testing processes.
It also addresses key challenges in adapting AI for testing, including the need for high-quality training data.
arXiv Detail & Related papers (2024-09-09T17:12:40Z)
- A System for Automated Unit Test Generation Using Large Language Models and Assessment of Generated Test Suites [1.4563527353943984]
Large Language Models (LLMs) have been applied to various aspects of software development.
We present AgoneTest: an automated system for generating test suites for Java projects.
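
The summary above gives no detail of AgoneTest's pipeline; the sketch below only illustrates the generic prompt-compile-repair loop that LLM-based test generators of this kind typically build on. It is not AgoneTest's actual design: `complete()` stands in for any chat-completion client, and `compiles()` for a hypothetical javac wrapper.

```python
# Generic LLM test-generation loop: prompt, compile, feed errors back, retry.
PROMPT = """Write a JUnit 5 test class for the Java class below.
Cover normal and edge cases. Return only Java code.

{source}
"""

def complete(prompt: str) -> str:
    """Placeholder for any LLM chat-completion client (an assumption, not AgoneTest's API)."""
    raise NotImplementedError

def compiles(java_source: str) -> tuple[bool, str]:
    """Hypothetical helper: write to a temp dir, run javac, return (ok, diagnostics)."""
    raise NotImplementedError

def generate_test_class(source: str, max_retries: int = 3) -> str:
    prompt = PROMPT.format(source=source)
    for _ in range(max_retries):
        candidate = complete(prompt)
        ok, error = compiles(candidate)
        if ok:
            return candidate
        # Self-repair: show the model its own compiler errors and ask for a fix.
        prompt += f"\nThe previous attempt failed to compile:\n{error}\nFix it."
    raise RuntimeError("no compilable test class produced")
```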
arXiv Detail & Related papers (2024-08-14T23:02:16Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Test Oracle Automation in the era of LLMs [52.69509240442899]
Large Language Models (LLMs) have demonstrated remarkable proficiency in tackling diverse software testing tasks.
This paper aims to enable discussions on the potential of using LLMs for test oracle automation, along with the challenges that may emerge during the generation of various types of oracles.
arXiv Detail & Related papers (2024-05-21T13:19:10Z)
- A Comprehensive Study on Automated Testing with the Software Lifecycle [0.6144680854063939]
The research examines how automated testing simplifies the evaluation of software quality, how much time it saves compared to manual testing, and the benefits and drawbacks of each approach.
Automated testing tools simplify the process of testing software applications and can be tailored to specific testing situations.
arXiv Detail & Related papers (2024-05-02T06:30:37Z)
- Automated Test Case Repair Using Language Models [0.5708902722746041]
Unrepaired broken test cases can degrade test suite quality and disrupt the software development process.
We present TaRGet, a novel approach leveraging pre-trained code language models for automated test case repair.
TaRGet treats test repair as a language translation task, employing a two-step process to fine-tune a language model.
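
As a rough illustration of the "repair as translation" framing, a fine-tuning example can pair the broken test and the breaking code changes (the source sequence) with the repaired test (the target sequence). The field names and delimiters below are assumptions for the sketch, not TaRGet's actual input format.

```python
# Sketch: casting test repair as a sequence-to-sequence fine-tuning task.
def to_training_example(broken_test: str, code_diff: str, repaired_test: str) -> dict:
    # Source: the broken test plus the SUT changes that broke it.
    source = f"[TEST]\n{broken_test}\n[DIFF]\n{code_diff}"
    # Target: the human-repaired test; pairs like this fine-tune a code LM.
    return {"input": source, "output": repaired_test}
```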
arXiv Detail & Related papers (2024-01-12T18:56:57Z)
- Towards Automatic Generation of Amplified Regression Test Oracles [44.45138073080198]
We propose a test oracle derivation approach to amplify regression test oracles.
The approach monitors the object state during test execution and compares it to the previous version to detect any changes in relation to the SUT's intended behaviour.
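
A minimal sketch of this state-monitoring idea follows, under the simplifying assumption that object state is just public attributes: record snapshots while the tests run on the old version, then replay the same steps on the new version and flag any divergence.

```python
# Sketch of state-based oracle amplification: snapshot object state while the
# test runs against version N, then assert the same snapshots on version N+1.
# snapshot() uses vars() for brevity; real tools handle nesting and collections.

def snapshot(obj) -> dict:
    return {k: repr(v) for k, v in vars(obj).items() if not k.startswith("_")}

def record_states(sut, steps):
    """Run the test steps against the old version and log state after each step."""
    log = []
    for step in steps:
        step(sut)
        log.append(snapshot(sut))
    return log

def check_states(sut, steps, expected_log):
    """Replay on the new version; any state divergence is a candidate regression."""
    for step, expected in zip(steps, expected_log):
        step(sut)
        diff = {k: (expected.get(k), v) for k, v in snapshot(sut).items()
                if expected.get(k) != v}
        assert not diff, f"state diverged: {diff}"
```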
arXiv Detail & Related papers (2023-07-28T12:38:44Z)
- Constraint-Guided Test Execution Scheduling: An Experience Report at ABB Robotics [13.50507740574158]
We present the results of a project called DynTest whose goal is to automate the scheduling of test execution from a large test repository.
This paper reports on our experience and lessons learned for successfully transferring constraint-based optimization models for test execution scheduling at ABB Robotics.
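
The DynTest models themselves are not reproduced in this summary. As a toy stand-in, the OR-Tools CP-SAT sketch below shows the shape a constraint model for test execution scheduling can take; the durations, priorities, time budget, and dependency are all invented for the example.

```python
# Toy constraint model for test scheduling with OR-Tools CP-SAT.
from ortools.sat.python import cp_model

tests = {"t1": (30, 5), "t2": (45, 9), "t3": (20, 3), "t4": (60, 8)}  # name: (minutes, priority)
BUDGET = 90  # nightly slot on one test rig, in minutes

model = cp_model.CpModel()
run = {name: model.NewBoolVar(name) for name in tests}

# Total selected duration must fit the nightly budget.
model.Add(sum(dur * run[n] for n, (dur, _) in tests.items()) <= BUDGET)
# Example dependency: t2 exercises features set up by t1.
model.AddImplication(run["t2"], run["t1"])
# Maximize the total priority of the executed tests.
model.Maximize(sum(pri * run[n] for n, (_, pri) in tests.items()))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([n for n in tests if solver.Value(run[n])])
```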
arXiv Detail & Related papers (2023-06-02T13:29:32Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
Its observed direct impact is a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)