An Empirical Evaluation of Using Large Language Models for Automated
Unit Test Generation
- URL: http://arxiv.org/abs/2302.06527v4
- Date: Mon, 11 Dec 2023 11:50:51 GMT
- Title: An Empirical Evaluation of Using Large Language Models for Automated
Unit Test Generation
- Authors: Max Schäfer, Sarah Nadi, Aryaz Eghbali, Frank Tip
- Abstract summary: This paper presents a large-scale empirical evaluation on the effectiveness of Large Language Models for automated unit test generation.
We implement our approach in TestPilot, a test generation tool for JavaScript that automatically generates unit tests for all API functions in an npm package.
We find that 92.8% of TestPilot's generated tests have no more than 50% similarity with existing tests.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unit tests play a key role in ensuring the correctness of software. However,
manually creating unit tests is a laborious task, motivating the need for
automation. Large Language Models (LLMs) have recently been applied to this
problem, utilizing additional training or few-shot learning on examples of
existing tests. This paper presents a large-scale empirical evaluation on the
effectiveness of LLMs for automated unit test generation without additional
training or manual effort, providing the LLM with the signature and
implementation of the function under test, along with usage examples extracted
from documentation. We also attempt to repair failed generated tests by
re-prompting the model with the failing test and error message. We implement
our approach in TestPilot, a test generation tool for JavaScript that
automatically generates unit tests for all API functions in an npm package. We
evaluate TestPilot using OpenAI's gpt3.5-turbo LLM on 25 npm packages with a
total of 1,684 API functions. The generated tests achieve a median statement
coverage of 70.2% and branch coverage of 52.8%, significantly improving on
Nessie, a recent feedback-directed JavaScript test generation technique, which
achieves only 51.3% statement coverage and 25.6% branch coverage. We also find
that 92.8% of TestPilot's generated tests have no more than 50% similarity with
existing tests (as measured by normalized edit distance), with none of them
being exact copies. Finally, we run TestPilot with two additional LLMs,
OpenAI's older code-cushman-002 LLM and the open LLM StarCoder. Overall, we
observed similar results with the former (68.2% median statement coverage), and
somewhat worse results with the latter (54.0% median statement coverage),
suggesting that the effectiveness of the approach is influenced by the size and
training set of the LLM, but does not fundamentally depend on the specific
model.
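To make the workflow described in the abstract concrete, here is a minimal TypeScript sketch of the prompt-then-repair loop: build a prompt from the function's signature, implementation, and documentation examples, ask the model for a test, and if the test fails, re-prompt with the failing test and its error message. The names (`FunctionInfo`, `ModelClient`, `buildPrompt`, `generateAndRepair`) and the prompt wording are illustrative assumptions, not TestPilot's actual implementation or API.

```typescript
// Illustrative sketch only: these names are hypothetical, not TestPilot's API.

interface FunctionInfo {
  name: string;
  signature: string;       // e.g. "function flatten(arr, depth)"
  implementation: string;  // source of the function under test
  docExamples: string[];   // usage snippets mined from documentation
}

interface ModelClient {
  // Wraps whatever LLM is used (gpt3.5-turbo, code-cushman-002, StarCoder, ...).
  complete(prompt: string): Promise<string>;
}

interface TestResult {
  passed: boolean;
  error?: string;          // error message of the failing test, if any
}

function buildPrompt(fn: FunctionInfo): string {
  return [
    `// Write a Mocha unit test for the following function.`,
    `// Signature: ${fn.signature}`,
    ...fn.docExamples.map((ex) => `// Usage example: ${ex}`),
    fn.implementation,
  ].join("\n");
}

async function generateAndRepair(
  fn: FunctionInfo,
  model: ModelClient,
  runTest: (testSource: string) => Promise<TestResult>,
  maxRepairs = 1
): Promise<{ test: string; result: TestResult }> {
  let test = await model.complete(buildPrompt(fn));
  let result = await runTest(test);

  // Repair step: re-prompt with the failing test and its error message.
  for (let i = 0; i < maxRepairs && !result.passed; i++) {
    const repairPrompt =
      buildPrompt(fn) +
      `\n// The following test fails with "${result.error}". Fix it:\n${test}`;
    test = await model.complete(repairPrompt);
    result = await runTest(test);
  }
  return { test, result };
}
```

The similarity figures are reported in terms of normalized edit distance; a small helper along these lines (again an illustrative sketch, not necessarily the paper's exact metric) makes the measure concrete:

```typescript
// Levenshtein distance normalized by the longer string's length, so that
// 0 means identical and 1 means completely different.
function normalizedEditDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  const maxLen = Math.max(a.length, b.length);
  return maxLen === 0 ? 0 : dp[a.length][b.length] / maxLen;
}

// A generated test has "no more than 50% similarity" with an existing one when
// 1 - normalizedEditDistance(generated, existing) <= 0.5.
```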
Related papers
- Model Equality Testing: Which Model Is This API Serving? [59.005869726179455]
We formalize detecting such distortions in API-served models (relative to their reference weights) as Model Equality Testing, a two-sample testing problem.
A test built on a simple string kernel achieves a median of 77.4% power against a range of distortions.
We then apply this test to commercial inference APIs for four Llama models, finding that 11 out of 31 endpoints serve different distributions than reference weights released by Meta.
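As a rough illustration of a kernel two-sample test of this kind, the sketch below compares two samples of model outputs using a k-gram spectrum kernel, a biased MMD^2 statistic, and a permutation p-value. The kernel and statistic are generic choices for illustration and may differ from the paper's exact test.

```typescript
// Count overlapping k-grams of a string (spectrum-kernel features).
function kgramCounts(s: string, k = 4): Map<string, number> {
  const counts = new Map<string, number>();
  for (let i = 0; i + k <= s.length; i++) {
    const g = s.slice(i, i + k);
    counts.set(g, (counts.get(g) ?? 0) + 1);
  }
  return counts;
}

// Simple string kernel: dot product of k-gram count vectors.
function stringKernel(a: string, b: string, k = 4): number {
  const ca = kgramCounts(a, k);
  const cb = kgramCounts(b, k);
  let dot = 0;
  for (const [g, n] of ca) dot += n * (cb.get(g) ?? 0);
  return dot;
}

// Biased MMD^2 estimate between two samples of strings.
function mmd2(x: string[], y: string[]): number {
  const mean = (s: string[], t: string[]) => {
    let sum = 0;
    for (const a of s) for (const b of t) sum += stringKernel(a, b);
    return sum / (s.length * t.length);
  };
  return mean(x, x) + mean(y, y) - 2 * mean(x, y);
}

// Fisher-Yates shuffle for the permutation test.
function shuffle<T>(arr: T[]): T[] {
  const a = [...arr];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Permutation test: how often does a random split of the pooled samples
// produce an MMD^2 at least as large as the observed one?
function permutationPValue(x: string[], y: string[], perms = 200): number {
  const observed = mmd2(x, y);
  const pooled = [...x, ...y];
  let extreme = 0;
  for (let p = 0; p < perms; p++) {
    const shuffled = shuffle(pooled);
    if (mmd2(shuffled.slice(0, x.length), shuffled.slice(x.length)) >= observed) extreme++;
  }
  return (extreme + 1) / (perms + 1);
}
```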
arXiv Detail & Related papers (2024-10-26T18:34:53Z)
- TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark [24.14654309612826]
TestGenEval comprises 68,647 tests from 1,210 code and test file pairs across 11 well-maintained Python repositories.
It covers initial test authoring, test suite completion, and code coverage improvements.
We evaluate several popular models, with sizes ranging from 7B to 405B parameters.
arXiv Detail & Related papers (2024-10-01T14:47:05Z)
- Improving LLM-based Unit test generation via Template-based Repair [8.22619177301814]
Unit testing is crucial for detecting bugs in individual program units, but writing tests consumes time and effort.
Large language models (LLMs) have demonstrated remarkable reasoning and generation capabilities.
In this paper, we propose TestART, a novel unit test generation method.
arXiv Detail & Related papers (2024-08-06T10:52:41Z)
- STAMP: Outlier-Aware Test-Time Adaptation with Stable Memory Replay [76.06127233986663]
Test-time adaptation (TTA) aims to address the distribution shift between the training and test data with only unlabeled data at test time.
This paper focuses on performing both sample recognition and outlier rejection during inference when outliers are present.
We propose a new approach called STAble Memory rePlay (STAMP), which performs optimization over a stable memory bank instead of the risky mini-batch.
arXiv Detail & Related papers (2024-07-22T16:25:41Z)
- CasModaTest: A Cascaded and Model-agnostic Self-directed Framework for Unit Test Generation [5.450831103980871]
CasModaTest is a cascaded, model-agnostic, and end-to-end unit test generation framework.
It generates test prefixes and test oracles and compiles or executes them to check their effectiveness.
arXiv Detail & Related papers (2024-06-22T05:52:39Z)
- LLM-Powered Test Case Generation for Detecting Tricky Bugs [30.82169191775785]
AID generates test inputs and oracles targeting plausibly correct programs.
We evaluate AID on two large-scale datasets with tricky bugs: TrickyBugs and EvalPlus.
The evaluation results show that the recall, precision, and F1 score of AID outperform the state-of-the-art by up to 1.80x, 2.65x, and 1.66x, respectively.
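For intuition, here is a minimal sketch of the differential-testing step that an approach built around plausibly correct programs can rely on: run generated inputs through several program variants and flag inputs on which the variants disagree as candidate bug-revealing tests. The names and types are hypothetical, and the LLM-driven generation of variants and inputs is abstracted away; this is not AID's actual implementation.

```typescript
type Program<I, O> = (input: I) => O;

interface Disagreement<I> {
  input: I;
  outputs: string[]; // JSON-serialized output (or error) of each variant
}

// Run every generated input through every plausibly-correct variant and
// collect inputs on which the variants disagree; these disagreements are
// candidates for tests that expose tricky bugs.
function differentialTest<I, O>(
  variants: Program<I, O>[],
  inputs: I[]
): Disagreement<I>[] {
  const disagreements: Disagreement<I>[] = [];
  for (const input of inputs) {
    const outputs = variants.map((v) => {
      try {
        return JSON.stringify(v(input));
      } catch (e) {
        return `throws: ${String(e)}`;
      }
    });
    if (new Set(outputs).size > 1) {
      disagreements.push({ input, outputs });
    }
  }
  return disagreements;
}
```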
arXiv Detail & Related papers (2024-04-16T06:20:06Z)
- GPT-HateCheck: Can LLMs Write Better Functional Tests for Hate Speech Detection? [50.53312866647302]
HateCheck is a suite for testing fine-grained model functionalities on synthesized data.
We propose GPT-HateCheck, a framework to generate more diverse and realistic functional tests from scratch.
Crowd-sourced annotation demonstrates that the generated test cases are of high quality.
arXiv Detail & Related papers (2024-02-23T10:02:01Z)
- Automated Unit Test Improvement using Large Language Models at Meta [44.87533111512982]
This paper describes Meta's TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests.
We describe the deployment of TestGen-LLM at Meta test-a-thons for the Instagram and Facebook platforms.
arXiv Detail & Related papers (2024-02-14T13:43:14Z)
- Observation-based unit test generation at Meta [52.4716552057909]
TestGen automatically generates unit tests, carved from serialized observations of complex objects, observed during app execution.
TestGen has landed 518 tests into production, which have been executed 9,617,349 times in continuous integration, finding 5,702 faults.
Our evaluation reveals that, when carving its observations from 4,361 reliable end-to-end tests, TestGen was able to generate tests for at least 86% of the classes covered by end-to-end tests.
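A rough sketch of the carving idea: turn a serialized runtime observation of a call into a replayable unit test that re-invokes the function and asserts the observed result. The observation format and the Jest-style output below are assumptions for illustration, not Meta's actual TestGen format.

```typescript
interface Observation {
  functionName: string;     // name of the function observed during app execution
  serializedArgs: string[]; // JSON-serialized arguments captured at runtime
  serializedResult: string; // JSON-serialized return value captured at runtime
}

// Emit a self-contained test that replays the observed call and checks
// that the current implementation still produces the observed result.
function carveTest(obs: Observation, importPath: string): string {
  const args = obs.serializedArgs.join(", ");
  return [
    `import { ${obs.functionName} } from "${importPath}";`,
    ``,
    `test("replays observed call to ${obs.functionName}", () => {`,
    `  const result = ${obs.functionName}(${args});`,
    `  expect(result).toEqual(${obs.serializedResult});`,
    `});`,
  ].join("\n");
}
```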
arXiv Detail & Related papers (2024-02-09T00:34:39Z)
- AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a non-parametric classifier to perform test-time adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.