Elevating Software Quality in Agile Environments: The Role of Testing Professionals in Unit Testing
- URL: http://arxiv.org/abs/2403.13220v1
- Date: Wed, 20 Mar 2024 00:41:49 GMT
- Title: Elevating Software Quality in Agile Environments: The Role of Testing Professionals in Unit Testing
- Authors: Lucas Neves, Oscar Campos, Robson Santos, Italo Santos, Cleyton Magalhaes, Ronnie de Souza Santos
- Abstract summary: Testing is an essential quality activity in the software development process.
This paper explores the participation of test engineers in unit testing within an industrial context.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Testing is an essential quality activity in the software development process. Usually, a software system is tested on several levels, starting with unit testing, which checks the smallest parts of the code, and ending with acceptance testing, which focuses on validation with the end user. Historically, unit testing has been the domain of developers, who are responsible for ensuring the accuracy of their code. However, in agile environments, testing professionals play an integral role in various quality improvement initiatives throughout each development cycle. This paper explores the participation of test engineers in unit testing within an industrial context, employing a survey-based research methodology. Our findings demonstrate that testing professionals have the potential to strengthen unit testing by collaborating with developers to craft thorough test cases and by fostering a culture of mutual learning and cooperation, ultimately contributing to the overall quality of software projects.
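To give a concrete sense of the developer-tester collaboration the abstract describes, below is a minimal, hypothetical sketch in Python with pytest. The `apply_discount` function and the specific test cases are illustrative assumptions, not taken from the paper; the point is that a developer often covers the happy path, while a testing professional can contribute boundary, rounding, and invalid-input cases.

```python
import pytest


# Hypothetical module under test (illustrative only, not from the paper).
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Developer-written happy-path test.
def test_apply_discount_typical_case():
    assert apply_discount(100.0, 25.0) == 75.0


# Cases a testing professional might add: boundaries, rounding, invalid input.
def test_apply_discount_zero_and_full_discount():
    assert apply_discount(100.0, 0.0) == 100.0
    assert apply_discount(100.0, 100.0) == 0.0


def test_apply_discount_rounding():
    assert apply_discount(19.99, 15.0) == 16.99


def test_apply_discount_rejects_invalid_input():
    with pytest.raises(ValueError):
        apply_discount(-1.0, 10.0)
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```

Running this file with pytest exercises all of the cases; the tester-contributed tests are an example of the "thorough test cases" the paper associates with developer-tester collaboration.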
Related papers
- Testing Research Software: An In-Depth Survey of Practices, Methods, and Tools [3.831549883667425]
Testing research software is challenging due to the software's complexity and to the unique culture of the research software community.
This study focuses on test case design, challenges with expected outputs, use of quality metrics, execution methods, tools, and desired tool features.
arXiv Detail & Related papers (2025-01-29T16:27:13Z) - Commit0: Library Generation from Scratch [77.38414688148006]
Commit0 is a benchmark that challenges AI agents to write libraries from scratch.
Agents are provided with a specification document outlining the library's API as well as a suite of interactive unit tests.
Commit0 also offers an interactive environment where models receive static analysis and execution feedback on the code they generate.
arXiv Detail & Related papers (2024-12-02T18:11:30Z) - Disrupting Test Development with AI Assistants [1.024113475677323]
Generative AI-assisted coding tools like GitHub Copilot, ChatGPT, and Tabnine have significantly transformed software development.
This paper analyzes how these innovations impact productivity and software test development metrics.
arXiv Detail & Related papers (2024-11-04T17:52:40Z) - Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z) - ASTER: Natural and Multi-language Unit Test Generation with LLMs [6.259245181881262]
We describe a generic pipeline that incorporates static analysis to guide LLMs in generating compilable and high-coverage test cases.
We conduct an empirical study to assess the quality of the generated tests in terms of code coverage and test naturalness.
arXiv Detail & Related papers (2024-09-04T21:46:18Z) - A System for Automated Unit Test Generation Using Large Language Models and Assessment of Generated Test Suites [1.4563527353943984]
Large Language Models (LLMs) have been applied to various aspects of software development.
We present AgoneTest: an automated system for generating test suites for Java projects.
arXiv Detail & Related papers (2024-08-14T23:02:16Z) - Prompting Large Language Models to Tackle the Full Software Development Lifecycle: A Case Study [72.24266814625685]
We explore the performance of large language models (LLMs) across the entire software development lifecycle with DevEval.
DevEval features four programming languages, multiple domains, high-quality data collection, and carefully designed and verified metrics for each task.
Empirical studies show that current LLMs, including GPT-4, fail to solve the challenges presented within DevEval.
arXiv Detail & Related papers (2024-03-13T15:13:44Z) - Observation-based unit test generation at Meta [52.4716552057909]
TestGen automatically generates unit tests, carved from serialized observations of complex objects, observed during app execution.
TestGen has landed 518 tests into production, which have been executed 9,617,349 times in continuous integration, finding 5,702 faults.
Our evaluation reveals that, when carving its observations from 4,361 reliable end-to-end tests, TestGen was able to generate tests for at least 86% of the classes covered by end-to-end tests.
arXiv Detail & Related papers (2024-02-09T00:34:39Z) - Software Testing and Code Refactoring: A Survey with Practitioners [3.977213079821398]
This study aims to explore how software testing professionals deal with code refactoring, to understand the benefits and limitations of this practice in the context of software testing.
We concluded that, in the context of software testing, code refactoring offers several benefits, such as supporting the maintenance of automated tests and improving the performance of the testing team.
Our study raises discussions on the importance of having testing professionals apply refactoring in the code of automated tests, allowing them to improve their coding abilities.
arXiv Detail & Related papers (2023-10-03T01:07:39Z) - SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been observed to be a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z) - Beyond Accuracy: Behavioral Testing of NLP models with CheckList [66.42971817954806]
CheckList is a task-agnostic methodology for testing NLP models.
CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation.
In a user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it.
arXiv Detail & Related papers (2020-05-08T15:48:31Z)
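The behavioral-testing idea in the CheckList entry above can be sketched in the same spirit. The example below is a hypothetical illustration in plain Python with pytest rather than the CheckList tooling itself; `predict_sentiment` is an assumed stand-in for an NLP model, and the invariance and minimum-functionality checks mirror two of the test types the paper describes.

```python
import pytest


# Hypothetical stand-in for an NLP sentiment model; a real setup would call
# the model under test instead of this simple keyword rule.
def predict_sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"


# Invariance-style check: small, meaning-preserving perturbations
# (changing a name, adding punctuation) should not flip the prediction.
@pytest.mark.parametrize("original, perturbed", [
    ("Anna thought the flight was great.", "Maria thought the flight was great."),
    ("The food was great.", "The food was great..."),
])
def test_prediction_invariant_to_small_perturbations(original, perturbed):
    assert predict_sentiment(original) == predict_sentiment(perturbed)


# Minimum-functionality-style check: clear-cut inputs the model must get right.
def test_obvious_examples():
    assert predict_sentiment("The flight was great.") == "positive"
    assert predict_sentiment("The flight was awful.") == "negative"
```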
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.