Software Testing and Code Refactoring: A Survey with Practitioners
- URL: http://arxiv.org/abs/2310.01719v1
- Date: Tue, 3 Oct 2023 01:07:39 GMT
- Title: Software Testing and Code Refactoring: A Survey with Practitioners
- Authors: Danilo Leandro Lima, Ronnie de Souza Santos, Guilherme Pires Garcia,
Sildemir S. da Silva, Cesar Franca, Luiz Fernando Capretz
- Abstract summary: This study aims to explore how software testing professionals deal with code refactoring to understand the benefits and limitations of this practice in the context of software testing.
We concluded that in the context of software testing, refactoring offers several benefits, such as supporting the maintenance of automated tests and improving the performance of the testing team.
Our study raises discussions on the importance of having testing professionals implement refactoring in the code of automated tests, allowing them to improve their coding abilities.
- Score: 3.977213079821398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, software testing professionals are commonly required to develop
coding skills to work on test automation. One essential skill required from
those who code is the ability to implement code refactoring, a valued quality
aspect of software development; however, software developers usually encounter
obstacles in successfully applying this practice. In this scenario, the present
study aims to explore how software testing professionals (e.g., software
testers, test engineers, test analysts, and software QAs) deal with code
refactoring to understand the benefits and limitations of this practice in the
context of software testing. We followed the guidelines to conduct surveys in
software engineering and applied three sampling techniques, namely convenience
sampling, purposive sampling, and snowballing sampling, to collect data from
testing professionals. We received answers from 80 individuals reporting their
experience refactoring the code of automated tests. We concluded that in the
context of software testing, refactoring offers several benefits, such as
supporting the maintenance of automated tests and improving the performance of
the testing team. However, practitioners might encounter barriers in
effectively implementing this practice, in particular, the lack of interest
from managers and leaders. Our study raises discussions on the importance of
having testing professionals implement refactoring in the code of automated
tests, allowing them to improve their coding abilities.
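To make the surveyed practice concrete, here is a minimal, hypothetical example of refactoring automated test code (the example is ours, not the paper's): duplicated setup in two pytest tests is extracted into a shared fixture, the kind of maintenance-friendly change the respondents describe.

```python
# Before this refactoring, each test built its own cart (duplicated setup,
# a common smell in automated tests). Extracting the setup into a pytest
# fixture keeps each test focused on one behavior, and future changes to
# the setup touch a single place. ShoppingCart is hypothetical.
import pytest


class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


@pytest.fixture
def cart_with_items():
    # Shared setup, previously copy-pasted into each test.
    cart = ShoppingCart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    return cart


def test_total(cart_with_items):
    assert cart_with_items.total() == 12.5


def test_item_count(cart_with_items):
    assert len(cart_with_items.items) == 2
```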
Related papers
- Testing Research Software: An In-Depth Survey of Practices, Methods, and Tools [3.831549883667425]
Testing research software is challenging due to the software's complexity and to the unique culture of the research software community.
This study focuses on test case design, challenges with expected outputs, use of quality metrics, execution methods, tools, and desired tool features.
arXiv Detail & Related papers (2025-01-29T16:27:13Z)
- Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z)
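The Codev-Agent summary above mentions extracting dynamic calling chains from existing unit tests. As a rough sketch of that general idea (an assumption about the mechanism, not Codev-Agent's actual implementation), Python's tracing hook can record which functions a test exercises:

```python
import sys

def collect_call_chain(test_fn):
    """Run test_fn and record the functions it calls, in order.

    A simplified stand-in for dynamic call-chain extraction; the real
    pipeline is not described at this level of detail in the abstract.
    """
    chain = []

    def tracer(frame, event, arg):
        if event == "call":
            chain.append(frame.f_code.co_name)
        return tracer

    sys.settrace(tracer)
    try:
        test_fn()
    finally:
        sys.settrace(None)
    return chain

# Hypothetical code under test and a unit test that exercises it.
def normalize(s):
    return s.strip().lower()

def test_normalize():
    assert normalize("  Hello ") == "hello"

print(collect_call_chain(test_normalize))  # ['test_normalize', 'normalize']
```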
- ASTER: Natural and Multi-language Unit Test Generation with LLMs [6.259245181881262]
We describe a generic pipeline that incorporates static analysis to guide LLMs in generating compilable and high-coverage test cases.
We conduct an empirical study to assess the quality of the generated tests in terms of code coverage and test naturalness.
arXiv Detail & Related papers (2024-09-04T21:46:18Z)
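ASTER's pipeline is described as using static analysis to guide the LLM toward compilable, high-coverage tests. The sketch below illustrates that general pattern under our own assumptions (the prompt wording and helper names are hypothetical, not ASTER's): collect function signatures with Python's ast module and fold them into a test-generation prompt.

```python
import ast

def signatures(source):
    """Statically extract function names and parameter lists."""
    tree = ast.parse(source)
    sigs = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            params = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"{node.name}({params})")
    return sigs

def build_test_prompt(source):
    """Fold the extracted signatures into a test-generation prompt.

    The prompt format is hypothetical; it only illustrates how static
    analysis output can steer an LLM toward compilable tests.
    """
    sig_list = "\n".join(f"- {s}" for s in signatures(source))
    return (
        "Write pytest unit tests that compile and achieve high coverage "
        "for the functions below.\n"
        f"Functions:\n{sig_list}\n"
        f"Source code:\n{source}"
    )

code = "def add(a, b):\n    return a + b\n"
print(build_test_prompt(code))
```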
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
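The failure-analysis entry above associates error messages with the code change that caused a test failure. As a toy baseline for that association task (not the paper's method, which delegates the matching to an LLM), candidate changes can be ranked by lexical overlap with the error message:

```python
import re

def tokens(text):
    """Lowercase identifier-like tokens from an error message or diff."""
    return set(re.findall(r"[A-Za-z_]\w+", text.lower()))

def rank_changes(error_message, changes):
    """Order candidate code changes by token overlap with the error.

    A naive stand-in: the surveyed approach asks an LLM to make this
    association instead of counting shared tokens.
    """
    err = tokens(error_message)
    return sorted(changes, key=lambda diff: len(err & tokens(diff)), reverse=True)

# Hypothetical failing-test error and candidate diffs.
error = "AssertionError in test_inventory: expected restock_level 5, got None"
diffs = [
    "- def restock_level(self):\n-     return 5\n+ def restock_level(self):\n+     return None",
    "+ def render_minimap(self): ...",
]
print(rank_changes(error, diffs)[0])  # the restock_level change ranks first
```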
- A Comprehensive Study on Automated Testing with the Software Lifecycle [0.6144680854063939]
The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared with manual testing, and how the two approaches differ in their benefits and drawbacks.
Automated testing tools simplify the process of testing software applications and can be tailored to specific testing situations.
arXiv Detail & Related papers (2024-05-02T06:30:37Z)
- Elevating Software Quality in Agile Environments: The Role of Testing Professionals in Unit Testing [0.0]
Testing is an essential quality activity in the software development process.
This paper explores the participation of test engineers in unit testing within an industrial context.
arXiv Detail & Related papers (2024-03-20T00:41:49Z)
- Are We Testing or Being Tested? Exploring the Practical Applications of Large Language Models in Software Testing [0.0]
A Large Language Model (LLM) represents a cutting-edge artificial intelligence model that generates coherent content.
LLM can play a pivotal role in software development, including software testing.
This study explores the practical application of LLMs in software testing within an industrial setting.
arXiv Detail & Related papers (2023-12-08T06:30:37Z)
- Towards Automatic Generation of Amplified Regression Test Oracles [44.45138073080198]
We propose a test oracle derivation approach to amplify regression test oracles.
The approach monitors the object state during test execution and compares it to the previous version to detect any changes in relation to the SUT's intended behaviour.
arXiv Detail & Related papers (2023-07-28T12:38:44Z)
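A minimal sketch of the state-monitoring idea in the oracle-amplification entry above, under assumed mechanics rather than the authors' actual tool: snapshot an object's fields during test execution and diff that snapshot against one recorded on the previous version of the SUT.

```python
def snapshot(obj):
    """Capture the object's public state as a plain dict."""
    return {k: v for k, v in vars(obj).items() if not k.startswith("_")}

def state_diff(old, new):
    """Report fields whose values changed between SUT versions."""
    keys = old.keys() | new.keys()
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

# Hypothetical system under test, before and after a code change.
class AccountV1:
    def __init__(self):
        self.balance = 100
        self.currency = "USD"

class AccountV2:
    def __init__(self):
        self.balance = 100
        self.currency = "EUR"  # behavioral change the amplified oracle should flag

old = snapshot(AccountV1())
new = snapshot(AccountV2())
print(state_diff(old, new))  # {'currency': ('USD', 'EUR')}
```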
- Towards Informed Design and Validation Assistance in Computer Games Using Imitation Learning [65.12226891589592]
This paper proposes a new approach to automated game validation and testing.
Our method leverages a data-driven imitation learning technique, which requires little effort and time and no knowledge of machine learning or programming.
arXiv Detail & Related papers (2022-08-15T11:08:44Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
This has been observed to reduce testing hours by 55% or more for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
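Risk-based test selection, which the SUPERNOVA summary names, is often implemented by scoring tests on signals such as historical failure rate and recent code churn. The sketch below is a generic illustration of that pattern with made-up data, not SUPERNOVA's actual model:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    failure_rate: float   # fraction of recent runs that failed
    churn: int            # lines recently changed in the code this test covers

def select_tests(records, budget):
    """Pick the highest-risk tests that fit in the budget (count of tests).

    Risk here is a simple product of failure history and churn; a production
    system would presumably combine many more signals with learned models.
    """
    ranked = sorted(records, key=lambda r: r.failure_rate * r.churn, reverse=True)
    return [r.name for r in ranked[:budget]]

suite = [
    TestRecord("test_physics", 0.20, 120),
    TestRecord("test_menu", 0.01, 5),
    TestRecord("test_netcode", 0.10, 300),
]
print(select_tests(suite, budget=2))  # ['test_netcode', 'test_physics']
```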