Automated Support for Unit Test Generation: A Tutorial Book Chapter
- URL: http://arxiv.org/abs/2110.13575v1
- Date: Tue, 26 Oct 2021 11:13:40 GMT
- Title: Automated Support for Unit Test Generation: A Tutorial Book Chapter
- Authors: Afonso Fontes, Gregory Gay, Francisco Gomes de Oliveira Neto, Robert
Feldt
- Abstract summary: Unit testing is a stage of testing where the smallest segment of code that can be tested in isolation from the rest of the system is tested.
Unit tests are typically written as executable code, often in a format provided by a unit testing framework such as pytest for Python.
This chapter introduces the concept of search-based unit test generation.
- Score: 21.716667622896193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unit testing is a stage of testing where the smallest segment of code that
can be tested in isolation from the rest of the system - often a class - is
tested. Unit tests are typically written as executable code, often in a format
provided by a unit testing framework such as pytest for Python.
Creating unit tests is a time and effort-intensive process with many
repetitive, manual elements. To illustrate how AI can support unit testing,
this chapter introduces the concept of search-based unit test generation. This
technique frames the selection of test input as an optimization problem - we
seek a set of test cases that meet some measurable goal of a tester - and
unleashes powerful metaheuristic search algorithms to identify the best
possible test cases within a restricted timeframe. This chapter introduces two
algorithms that can generate pytest-formatted unit tests, tuned towards
coverage of source code statements. The chapter concludes by discussing more
advanced concepts and gives pointers to further reading for how artificial
intelligence can support developers and testers when unit testing software.
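As a concrete illustration of the idea described in the abstract, the sketch below shows coverage-guided random test generation in Python: it traces which statements of a function under test are executed, keeps the inputs that reach new statements, and prints the result as pytest-formatted tests. It is a minimal sketch, not the chapter's two algorithms; `classify_triangle` and the emitted `from triangle import ...` module name are illustrative assumptions, and the search is plain random sampling rather than a metaheuristic.

```python
import random
import sys

def classify_triangle(a, b, c):
    """Hypothetical function under test."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def lines_covered(func, args):
    """Run func(*args) and record the line numbers executed inside it."""
    covered = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            covered.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return covered, result

def random_search(func, budget=1000):
    """Keep randomly generated inputs that reach statements not seen before."""
    suite, total_covered = [], set()
    for _ in range(budget):
        args = tuple(random.randint(-2, 5) for _ in range(3))
        covered, result = lines_covered(func, args)
        if covered - total_covered:  # new statements reached
            total_covered |= covered
            suite.append((args, result))
    return suite

def emit_pytest(suite, func_name="classify_triangle"):
    """Render the selected inputs as pytest-formatted regression tests."""
    lines = [f"from triangle import {func_name}", ""]
    for i, (args, expected) in enumerate(suite):
        lines += [f"def test_{func_name}_{i}():",
                  f"    assert {func_name}{args!r} == {expected!r}", ""]
    return "\n".join(lines)

if __name__ == "__main__":
    print(emit_pytest(random_search(classify_triangle)))
```

A metaheuristic generator would replace the random sampling loop with, for example, a genetic algorithm that mutates and recombines the best inputs found so far, but the coverage-tracing and pytest-emission steps would remain the same.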
Related papers
- Harnessing the Power of LLMs: Automating Unit Test Generation for High-Performance Computing [7.3166218350585135]
Unit testing is crucial in software engineering for ensuring quality.
However, unit testing is not widely used in parallel and high-performance computing software, particularly in scientific applications.
We propose an automated method for generating unit tests for such software.
arXiv Detail & Related papers (2024-07-06T22:45:55Z)
- Bounding Random Test Set Size with Computational Learning Theory [0.2999888908665658]
We show how probabilistic approaches used in Machine Learning to answer this kind of question (how many random samples are enough) can be applied in our testing context.
We are the first to enable this from only knowing the number of coverage targets in the source code.
We validate this bound on a large set of Java units, and an autonomous driving system.
arXiv Detail & Related papers (2024-05-27T10:15:16Z)
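As a back-of-the-envelope illustration of this style of probabilistic reasoning (explicitly not the bound derived in the paper, which works from the number of coverage targets alone), a union-bound argument gives a sufficient random test-set size when each coverage target is assumed to be hit with some minimum probability per test:

```python
import math

def sufficient_random_tests(num_targets: int, hit_prob: float, delta: float = 0.05) -> int:
    """Illustrative bound, not the paper's: if each of num_targets coverage
    targets is reached by one random test with probability >= hit_prob, then a
    given target is still missed after n tests with probability at most
    (1 - hit_prob)**n <= exp(-n * hit_prob); a union bound over all targets
    shows that n >= ln(num_targets / delta) / hit_prob tests cover every target
    with probability at least 1 - delta."""
    return math.ceil(math.log(num_targets / delta) / hit_prob)

# Example: 200 coverage targets, each hit by at least 1% of random inputs.
print(sufficient_random_tests(num_targets=200, hit_prob=0.01))  # 830
```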
- Observation-based unit test generation at Meta [52.4716552057909]
TestGen automatically generates unit tests, carved from serialized observations of complex objects, observed during app execution.
TestGen has landed 518 tests into production, which have been executed 9,617,349 times in continuous integration, finding 5,702 faults.
Our evaluation reveals that, when carving its observations from 4,361 reliable end-to-end tests, TestGen was able to generate tests for at least 86% of the classes covered by end-to-end tests.
arXiv Detail & Related papers (2024-02-09T00:34:39Z)
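The sketch below illustrates, with made-up names, the carving idea summarized above: an object's state and the outcome of a call are serialized while the application runs, and a pytest-formatted unit test is generated that replays the observation. TestGen itself works on Meta's codebase and is considerably more elaborate.

```python
import json

class ShoppingCart:
    """Hypothetical production class observed during app execution."""
    def __init__(self, items=None):
        self.items = dict(items or {})

    def add(self, name, price):
        self.items[name] = price

    def total(self):
        return sum(self.items.values())

def carve_observation(obj, method_name, path="observation.json"):
    """Serialize the object's state and the observed return value of a call."""
    result = getattr(obj, method_name)()
    with open(path, "w") as fh:
        json.dump({"state": obj.__dict__, "method": method_name, "result": result}, fh)

def emit_test(path="observation.json"):
    """Turn a serialized observation into a pytest-formatted unit test."""
    with open(path) as fh:
        obs = json.load(fh)
    return "\n".join([
        "from cart import ShoppingCart",  # illustrative module name
        "",
        "def test_carved_from_observation():",
        f"    cart = ShoppingCart({obs['state']['items']!r})",
        f"    assert cart.{obs['method']}() == {obs['result']!r}",
        "",
    ])

if __name__ == "__main__":
    cart = ShoppingCart()
    cart.add("book", 12.5)  # behaviour observed while the app runs
    cart.add("pen", 1.5)
    carve_observation(cart, "total")
    print(emit_test())
```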
- TestSpark: IntelliJ IDEA's Ultimate Test Generation Companion [15.13443954421825]
This paper introduces TestSpark, a plugin for IntelliJ IDEA that enables users to generate unit tests with only a few clicks.
TestSpark also allows users to easily modify and run each generated test and integrate them into the project workflow.
arXiv Detail & Related papers (2024-01-12T13:53:57Z)
- Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions [0.05657375260432172]
Unit testing is a commonly used approach in software engineering for testing the correctness and robustness of written code.
In this study, we explore the effect of different prompts on the quality of unit tests generated by Code Interpreter.
We find that the quality of the generated unit tests is not sensitive to changes in minor details in the prompts provided.
arXiv Detail & Related papers (2023-09-30T20:36:23Z)
- Towards Automatic Generation of Amplified Regression Test Oracles [44.45138073080198]
We propose a test oracle derivation approach to amplify regression test oracles.
The approach monitors the object state during test execution and compares it to the previous version to detect any changes in relation to the intended behaviour of the system under test (SUT).
arXiv Detail & Related papers (2023-07-28T12:38:44Z)
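A minimal sketch of the idea summarized above, with illustrative names: the object state observed during a test run is stored as a snapshot, and when the same test later runs against a new version of the system under test, the state is compared against the snapshot so that unintended behavioural changes surface as additional (amplified) oracle failures. The paper's approach is more involved than this.

```python
import json
from pathlib import Path

SNAPSHOT = Path("state_snapshot.json")

def capture_state(obj):
    """Reduce an object to a JSON-serializable view of its public attributes."""
    return {k: v for k, v in vars(obj).items() if not k.startswith("_")}

def check_against_snapshot(obj, key):
    """First run: record the state. Later runs (e.g. against a new SUT version):
    fail if the observed state differs from the recorded snapshot."""
    snapshots = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}
    state = capture_state(obj)
    if key not in snapshots:
        snapshots[key] = state
        SNAPSHOT.write_text(json.dumps(snapshots, indent=2))
        return
    assert snapshots[key] == state, f"object state changed for {key!r}"

# Usage inside an existing regression test (names are illustrative):
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

def test_deposit_updates_balance():
    account = Account()
    account.deposit(100)
    assert account.balance == 100                                    # original oracle
    check_against_snapshot(account, "test_deposit_updates_balance")  # amplified oracle
```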
- Improved Compositional Generalization by Generating Demonstrations for Meta-Learning [81.38804205212425]
We show substantially improved performance on a previously unsolved compositional behaviour split without a loss of performance on other splits.
In this case, searching for relevant demonstrations even with an oracle function is not sufficient to attain good performance when using meta-learning.
arXiv Detail & Related papers (2023-05-22T14:58:54Z)
- ChatUniTest: A Framework for LLM-Based Test Generation [17.296369651892228]
This paper presents ChatUniTest, an automated unit test generation framework.
ChatUniTest incorporates an adaptive focal context mechanism to encompass valuable context in prompts.
Our effectiveness evaluation reveals that ChatUniTest outperforms TestSpark and EvoSuite in half of the evaluated projects.
arXiv Detail & Related papers (2023-05-08T15:12:07Z)
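The summary above mentions gathering focal context into the prompt; the snippet below is only a guess at what such prompt assembly might look like, transplanted to Python (ChatUniTest targets Java, and its adaptive mechanism is more sophisticated). All names are illustrative and the actual LLM call is omitted.

```python
import inspect

def build_prompt(focal_function, helpers=()):
    """Assemble a test-generation prompt from the focal function plus whatever
    surrounding context (here: helper functions it calls) fits the budget."""
    context = "\n\n".join(inspect.getsource(h) for h in helpers)
    return (
        "Write pytest unit tests for the focal function below.\n\n"
        f"# Focal function\n{inspect.getsource(focal_function)}\n"
        f"# Additional context\n{context}\n"
        "Return only runnable Python test code."
    )

# Illustrative focal function and helper
def parse_price(text):
    return round(float(text.strip().lstrip("$")), 2)

def apply_discount(price, percent):
    return parse_price(str(price * (1 - percent / 100)))

print(build_prompt(apply_discount, helpers=(parse_price,)))
```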
- BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models [73.29106813131818]
Bias testing is currently cumbersome since the test sentences are generated from a limited set of manual templates or require expensive crowd-sourcing.
We propose using ChatGPT for the controllable generation of test sentences, given any arbitrary user-specified combination of social groups and attributes.
We present an open-source comprehensive bias testing framework (BiasTestGPT), hosted on HuggingFace, that can be plugged into any open-source PLM for bias testing.
arXiv Detail & Related papers (2023-02-14T22:07:57Z)
- CodeT: Code Generation with Generated Tests [49.622590050797236]
We explore the use of pre-trained language models to automatically generate test cases.
CodeT executes the code solutions using the generated test cases, and then chooses the best solution.
We evaluate CodeT on five different pre-trained models with both HumanEval and MBPP benchmarks.
arXiv Detail & Related papers (2022-07-21T10:18:37Z)
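A simplified sketch of the selection step summarized above, with illustrative candidates: each candidate solution is executed against the generated test cases and the one passing the most tests is kept. Note that the actual CodeT ranking uses a dual execution agreement between solutions and tests rather than a plain pass count.

```python
def run_tests(solution, tests):
    """Count how many generated test cases a candidate solution passes."""
    passed = 0
    for args, expected in tests:
        try:
            if solution(*args) == expected:
                passed += 1
        except Exception:
            pass
    return passed

def choose_best(solutions, tests):
    """Pick the candidate that satisfies the most generated tests."""
    return max(solutions, key=lambda s: run_tests(s, tests))

# Illustrative candidate solutions (e.g. sampled from a language model)
candidates = [
    lambda xs: max(xs),                   # wrong for the intended task
    lambda xs: sorted(xs)[len(xs) // 2],  # intended: median of an odd-length list
]
# Illustrative generated test cases: (input args, expected output)
generated_tests = [(([3, 1, 2],), 2), (([9, 5, 7],), 7), (([1],), 1)]

best = choose_best(candidates, generated_tests)
print(best([10, 30, 20]))  # 20
```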
- Beyond Accuracy: Behavioral Testing of NLP models with CheckList [66.42971817954806]
CheckList is a task-agnostic methodology for testing NLP models.
CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation.
In a user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it.
arXiv Detail & Related papers (2020-05-08T15:48:31Z)
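To tie this back to the chapter's pytest theme, here is a hand-rolled sketch of two CheckList-style behavioural tests (a minimum functionality test and an invariance test) against a toy sentiment model. It does not use the actual CheckList library; the model, template, and name list are made up.

```python
# Toy sentiment "model": returns "pos" or "neg" for a sentence.
def predict(sentence: str) -> str:
    negative_words = ("terrible", "awful", "bad")
    return "neg" if any(w in sentence.lower() for w in negative_words) else "pos"

NAMES = ["Anna", "Omar", "Mei", "Lucas"]

def test_mft_clearly_negative_wording():
    """Minimum functionality test: clearly negative wording must be classified as negative."""
    for word in ("terrible", "awful"):
        assert predict(f"The service was {word}.") == "neg"

def test_inv_prediction_unchanged_by_person_name():
    """Invariance test: swapping the person's name should not change the prediction."""
    template = "{name} said the food was bad."
    predictions = {predict(template.format(name=n)) for n in NAMES}
    assert len(predictions) == 1
```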
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.