CAT-LM: Training Language Models on Aligned Code And Tests
- URL: http://arxiv.org/abs/2310.01602v1
- Date: Mon, 2 Oct 2023 19:52:22 GMT
- Title: CAT-LM: Training Language Models on Aligned Code And Tests
- Authors: Nikitha Rao, Kush Jain, Uri Alon, Claire Le Goues, Vincent J. Hellendoorn
- Abstract summary: Testing is an integral part of the software development process. Yet, writing tests is time-consuming and therefore often neglected.
We propose the Aligned Code And Tests Language Model (CAT-LM), a GPT-style language model with 2.7 billion parameters, trained on a corpus of Python and Java projects.
- Score: 19.526181671936243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Testing is an integral part of the software development process. Yet, writing
tests is time-consuming and therefore often neglected. Classical test
generation tools such as EvoSuite generate behavioral test suites by optimizing
for coverage, but tend to produce tests that are hard to understand. Language
models trained on code can generate code that is highly similar to that written
by humans, but current models are trained to generate each file separately, as
is standard practice in natural language processing, and thus fail to consider
the code-under-test context when producing a test file. In this work, we
propose the Aligned Code And Tests Language Model (CAT-LM), a GPT-style
language model with 2.7 billion parameters, trained on a corpus of Python and
Java projects. We utilize a novel pretraining signal that explicitly considers
the mapping between code and test files when available. We also drastically
increase the maximum sequence length of inputs to 8,192 tokens, 4x more than
typical code generation models, to ensure that the code context is available to
the model when generating test code. We analyze its usefulness for realistic
applications, showing that sampling with filtering (e.g., by compilability,
coverage) allows it to efficiently produce tests that achieve coverage similar
to ones written by developers while resembling their writing style. By
utilizing the code context, CAT-LM generates more valid tests than even much
larger language models trained with more data (CodeGen 16B and StarCoder) and
substantially outperforms a recent test-specific model (TeCo) at test
completion. Overall, our work highlights the importance of incorporating
software-specific insights when training language models for code and paves the
way to more powerful automated test generation.
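The "sampling with filtering" step described in the abstract can be pictured as a short generate-then-filter loop. The sketch below is a minimal illustration under assumptions, not the paper's actual pipeline: `generate` is a hypothetical callable standing in for a query to CAT-LM (or any code language model), and the separator comment is only a placeholder for whatever boundary marker the model was actually trained with.

```python
import ast
from typing import Callable, List, Optional


def sample_and_filter(
    code_under_test: str,
    generate: Callable[[str], str],  # hypothetical stand-in for querying CAT-LM
    num_samples: int = 10,
) -> Optional[str]:
    """Draw several candidate test files and return the first syntactically valid one."""
    # CAT-LM is trained on code files paired with their aligned test files,
    # so the prompt is the code under test followed by a boundary marker.
    # The comment below is only a placeholder for the model's real separator.
    prompt = code_under_test + "\n# --- tests ---\n"
    valid: List[str] = []
    for _ in range(num_samples):
        candidate = generate(prompt)
        try:
            ast.parse(candidate)  # cheap "does it compile?" filter for Python
        except SyntaxError:
            continue
        valid.append(candidate)
    # The paper additionally executes surviving candidates and ranks them by
    # coverage; this sketch stops at the syntax filter to stay self-contained.
    return valid[0] if valid else None
```

In the paper, the surviving candidates are further filtered and ranked by running them against the code under test (e.g., by compilability and achieved coverage); the syntax check above is just the cheapest stage of that funnel.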
Related papers
- CLOVER: A Test Case Generation Benchmark with Coverage, Long-Context, and Verification [71.34070740261072]
This paper presents a benchmark, CLOVER, to evaluate models' capabilities in generating and completing test cases.
The benchmark is containerized for code execution across tasks, and we will release the code, data, and construction methodologies.
arXiv Detail & Related papers (2025-02-12T21:42:56Z)
- GenX: Mastering Code and Test Generation with Execution Feedback [7.225594526057816]
We propose a novel approach that concurrently trains a code generation model and a test generation model.
We introduce two strategies for test and code data augmentation and a new scoring function for code and test ranking.
The results demonstrate that our models, when iteratively trained with an increasing number of test cases and code solutions, outperform those trained on the original dataset.
arXiv Detail & Related papers (2024-12-18T03:18:21Z)
- Commit0: Library Generation from Scratch [77.38414688148006]
Commit0 is a benchmark that challenges AI agents to write libraries from scratch.
Agents are provided with a specification document outlining the library's API as well as a suite of interactive unit tests.
Commit0 also offers an interactive environment where models receive static analysis and execution feedback on the code they generate.
arXiv Detail & Related papers (2024-12-02T18:11:30Z)
- TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark [24.14654309612826]
TestGenEval comprises 68,647 tests from 1,210 code and test file pairs across 11 well-maintained Python repositories.
It covers initial test authoring, test suite completion, and code coverage improvements.
We evaluate several popular models, with sizes ranging from 7B to 405B parameters.
arXiv Detail & Related papers (2024-10-01T14:47:05Z)
- PyTester: Deep Reinforcement Learning for Text-to-Testcase Generation [20.441921569948562]
Test-driven development (TDD) mandates writing test cases based on requirements before writing the actual code.
While writing test cases is the centerpiece of TDD, it is time-consuming, expensive, and often shunned by developers.
We introduce PyTester, a Text-to-Testcase generation approach that can automatically generate correct, executable, complete, and effective test cases.
arXiv Detail & Related papers (2024-01-15T10:21:58Z)
- REST: Retrieval-Based Speculative Decoding [69.06115086237207]
We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm designed to speed up language model generation.
Unlike previous methods that rely on a draft language model for speculative decoding, REST harnesses the power of retrieval to generate draft tokens.
When benchmarked on 7B and 13B language models in a single-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on code or text generation.
arXiv Detail & Related papers (2023-11-14T15:43:47Z)
- Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions [0.05657375260432172]
Unit testing is a commonly-used approach in software engineering to test the correctness and robustness of written code.
In this study, we explore the effect of different prompts on the quality of unit tests generated by Code Interpreter.
We find that the quality of the generated unit tests is not sensitive to changes in minor details in the prompts provided.
arXiv Detail & Related papers (2023-09-30T20:36:23Z)
- Using Large Language Models to Generate JUnit Tests: An Empirical Study [0.4788487793976782]
A code generation model generates code by taking a prompt from a code comment, existing code, or a combination of both.
We investigated how well three models (Codex, GPT-3.5-Turbo, and StarCoder) can generate unit tests.
We found that the Codex model achieved above 80% coverage for the HumanEval dataset, but no model had more than 2% coverage for the EvoSuite SF110 benchmark.
arXiv Detail & Related papers (2023-04-30T07:28:06Z)
- CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstrings for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
arXiv Detail & Related papers (2022-11-25T18:05:44Z)
- CodeT: Code Generation with Generated Tests [49.622590050797236]
We explore the use of pre-trained language models to automatically generate test cases.
CodeT executes the code solutions using the generated test cases, and then chooses the best solution.
We evaluate CodeT on five different pre-trained models with both HumanEval and MBPP benchmarks.
arXiv Detail & Related papers (2022-07-21T10:18:37Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)