CAT-LM: Training Language Models on Aligned Code And Tests
- URL: http://arxiv.org/abs/2310.01602v1
- Date: Mon, 2 Oct 2023 19:52:22 GMT
- Title: CAT-LM: Training Language Models on Aligned Code And Tests
- Authors: Nikitha Rao, Kush Jain, Uri Alon, Claire Le Goues, Vincent J. Hellendoorn
- Abstract summary: Testing is an integral part of the software development process. Yet, writing tests is time-consuming and therefore often neglected.
We propose the Aligned Code And Tests Language Model (CAT-LM), a GPT-style language model with 2.7 billion parameters, trained on a corpus of Python and Java projects.
- Score: 19.526181671936243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Testing is an integral part of the software development process. Yet, writing
tests is time-consuming and therefore often neglected. Classical test
generation tools such as EvoSuite generate behavioral test suites by optimizing
for coverage, but tend to produce tests that are hard to understand. Language
models trained on code can generate code that is highly similar to that written
by humans, but current models are trained to generate each file separately, as
is standard practice in natural language processing, and thus fail to consider
the code-under-test context when producing a test file. In this work, we
propose the Aligned Code And Tests Language Model (CAT-LM), a GPT-style
language model with 2.7 billion parameters, trained on a corpus of Python and
Java projects. We utilize a novel pretraining signal that explicitly considers
the mapping between code and test files when available. We also drastically
increase the maximum sequence length of inputs to 8,192 tokens, 4x more than
typical code generation models, to ensure that the code context is available to
the model when generating test code. We analyze its usefulness for realistic
applications, showing that sampling with filtering (e.g., by compilability,
coverage) allows it to efficiently produce tests that achieve coverage similar
to ones written by developers while resembling their writing style. By
utilizing the code context, CAT-LM generates more valid tests than even much
larger language models trained with more data (CodeGen 16B and StarCoder) and
substantially outperforms a recent test-specific model (TeCo) at test
completion. Overall, our work highlights the importance of incorporating
software-specific insights when training language models for code and paves the
way to more powerful automated test generation.
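To make the sample-then-filter workflow concrete, here is a minimal sketch: sample several candidate test files conditioned on the code under test, then keep only the syntactically valid ones. The model handle, its generate() call, and the separator comment are hypothetical placeholders, not the released CAT-LM tooling.

```python
# Minimal sketch of sample-then-filter test generation in the spirit of CAT-LM.
# `model`, its generate() call, and the separator comment are hypothetical
# placeholders; only the syntax check uses the standard library.
import ast


def generate_filtered_tests(code_under_test: str, model, n_samples: int = 8) -> list[str]:
    """Sample candidate test files conditioned on the code under test,
    then keep only the ones that are at least syntactically valid."""
    # Assumed prompt layout: the code file, a separator, then the test file to write.
    prompt = code_under_test + "\n# --- corresponding test file ---\n"
    candidates = [model.generate(prompt, max_new_tokens=512) for _ in range(n_samples)]

    valid = []
    for test_src in candidates:
        try:
            ast.parse(test_src)  # cheap compilability/syntax filter
        except SyntaxError:
            continue
        valid.append(test_src)
    # A fuller pipeline would also execute the surviving tests and rank by coverage.
    return valid
```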
Related papers
- TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark [24.14654309612826]
TestGenEval comprises 68,647 tests from 1,210 code and test file pairs across 11 well-maintained Python repositories.
It covers initial test authoring, test suite completion, and code coverage improvement.
We evaluate several popular models, with sizes ranging from 7B to 405B parameters.
arXiv: 2024-10-01
- Multi-language Unit Test Generation using LLMs [6.259245181881262]
We describe a generic pipeline that incorporates static analysis to guide LLMs in generating compilable and high-coverage test cases.
We show how the pipeline can be applied to different programming languages, specifically Java and Python, and to complex software requiring environment mocking.
Our results demonstrate that LLM-based test generation, when guided by static analysis, can be competitive with, and even outperform, state-of-the-art test-generation techniques in coverage achieved.
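As a rough illustration of static analysis guiding test generation, the sketch below collects a module's public function signatures with Python's ast module and folds them into the prompt; the prompt wording and the llm_complete callable are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: use lightweight static analysis (Python's ast module) to
# collect a module's public function signatures and fold them into the prompt.
import ast


def build_test_prompt(source: str, module_name: str) -> str:
    tree = ast.parse(source)
    signatures = [
        f"{node.name}({', '.join(arg.arg for arg in node.args.args)})"
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_")
    ]
    return (
        f"Write pytest unit tests for module `{module_name}`.\n"
        f"Public functions: {', '.join(signatures)}\n"
        f"Source code:\n{source}\n"
    )


def generate_tests(source: str, module_name: str, llm_complete) -> str:
    # llm_complete is an assumed callable wrapping whatever model is in use;
    # a real pipeline would also compile the result and retry on failure.
    return llm_complete(build_test_prompt(source, module_name))
```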
arXiv: 2024-09-04
- TDD Without Tears: Towards Test Case Generation from Requirements through Deep Reinforcement Learning [22.331330777536046]
Test-driven development (TDD) mandates writing test cases based on requirements before writing the actual code.
While writing test cases is the centerpiece of TDD, it is time-consuming, expensive, and often shunned by developers.
We introduce PyTester, a Text-to-Testcase generation approach that can automatically generate correct, executable, complete, and effective test cases.
arXiv: 2024-01-15
- REST: Retrieval-Based Speculative Decoding [69.06115086237207]
We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm designed to speed up language model generation.
Unlike previous methods that rely on a draft language model for speculative decoding, REST harnesses the power of retrieval to generate draft tokens.
When benchmarked on 7B and 13B language models in a single-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on code or text generation.
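A much-simplified sketch of the idea follows, assuming a toy exact-match datastore and a greedy acceptance rule; REST's real datastore, retrieval, and verification are more sophisticated, and a real implementation scores all draft tokens in a single forward pass.

```python
# Much-simplified sketch of retrieval-based drafting with greedy verification.
# retrieve_draft, the datastore layout, and model.next_token are assumed interfaces.
def retrieve_draft(context: list[int], datastore: dict, max_suffix: int = 4) -> list[int]:
    """Look up a draft continuation keyed on the last few context tokens."""
    for n in range(max_suffix, 0, -1):  # prefer the longest matching suffix
        draft = datastore.get(tuple(context[-n:]))
        if draft:
            return draft
    return []


def decode_step(context: list[int], datastore: dict, model) -> list[int]:
    draft = retrieve_draft(context, datastore)
    accepted = []
    for token in draft:
        if model.next_token(context + accepted) == token:  # draft agrees with the model
            accepted.append(token)
        else:
            break
    # Always emit at least one token produced by the target model itself.
    accepted.append(model.next_token(context + accepted))
    return accepted
```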
arXiv: 2023-11-14
- Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions [0.05657375260432172]
Unit testing is a commonly-used approach in software engineering to test the correctness and robustness of written code.
In this study, we explore the effect of different prompts on the quality of unit tests generated by Code Interpreter.
We find that the quality of the generated unit tests is not sensitive to changes in minor details in the prompts provided.
arXiv: 2023-09-30
- Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
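As a loose illustration of mutation-based augmentation for execution data, the sketch below applies one toy mutation (swapping + for -) to a seed program, runs the mutant, and records its final variable state; CodeExecutor's actual mutation operators and trace format are richer than this.

```python
# Loose sketch of mutation-based augmentation for code-execution data: apply a
# toy mutation, run the mutant, and record its final variable state.
import ast
import random


class SwapAddSub(ast.NodeTransformer):
    """Randomly turn some additions into subtractions."""

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add) and random.random() < 0.5:
            node.op = ast.Sub()
        return node


def mutate_and_execute(source: str) -> tuple[str, dict]:
    tree = SwapAddSub().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    mutated = ast.unparse(tree)  # Python 3.9+
    state: dict = {}
    exec(compile(mutated, "<mutant>", "exec"), {}, state)  # record final variable values
    return mutated, state


# Example: mutate_and_execute("x = 1 + 2\ny = x * 3")
```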
arXiv: 2023-05-08
- Using Large Language Models to Generate JUnit Tests: An Empirical Study [0.4788487793976782]
A code generation model generates code by taking a prompt from a code comment, existing code, or a combination of both.
We investigated how well three models (Codex, GPT-3.5-Turbo, and StarCoder) can generate unit tests.
We found that the Codex model achieved above 80% coverage for the HumanEval dataset, but no model had more than 2% coverage for the EvoSuite SF110 benchmark.
arXiv: 2023-04-30
- CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstring for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
arXiv: 2022-11-25
- Interactive Code Generation via Test-Driven User-Intent Formalization [60.90035204567797]
Large language models (LLMs) produce code from informal natural language (NL) intent.
It is hard to define a notion of correctness since natural language can be ambiguous and lacks a formal semantics.
We describe a language-agnostic abstract algorithm and a concrete implementation TiCoder.
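The core loop can be pictured with a small, hedged sketch: propose a model-generated test, ask the user whether its observed behavior matches their intent, and prune candidate implementations accordingly. All interfaces below are assumptions rather than TiCoder's API.

```python
# Hedged sketch of test-driven intent clarification: propose a generated test,
# ask the user whether its behavior matches their intent, and prune candidates.
def clarify_intent(candidates, proposed_tests, run_test, ask_user):
    """candidates: candidate implementations; proposed_tests: model-generated tests;
    run_test(impl, test) -> bool; ask_user(test) -> bool (does this behavior match intent?)."""
    for test in proposed_tests:
        if ask_user(test):  # user approves the behavior shown by the test
            candidates = [c for c in candidates if run_test(c, test)]
        else:               # user rejects it, so keep implementations that fail the test
            candidates = [c for c in candidates if not run_test(c, test)]
        if len(candidates) <= 1:
            break
    return candidates
```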
arXiv: 2022-08-11
- CodeT: Code Generation with Generated Tests [49.622590050797236]
We explore the use of pre-trained language models to automatically generate test cases.
CodeT executes the code solutions using the generated test cases, and then chooses the best solution.
We evaluate CodeT on five different pre-trained models with both HumanEval and MBPP benchmarks.
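A minimal sketch of the idea is shown below, ranking candidate solutions by how many generated tests they pass; CodeT's actual criterion is a dual execution agreement that also groups solutions by behavior, and the sandboxed runner is assumed.

```python
# Minimal sketch of ranking candidate solutions with generated tests; `passes`
# stands in for an assumed sandboxed test runner.
def pick_best_solution(solutions: list[str], generated_tests: list[str], passes) -> str:
    def score(solution: str) -> int:
        return sum(passes(solution, test) for test in generated_tests)
    return max(solutions, key=score)
```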
arXiv: 2022-07-21
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
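As a toy illustration of retrieval-augmented completion, the sketch below retrieves the most lexically similar snippet and prepends it to the unfinished code before calling the model; the token-overlap retriever and complete_fn are stand-ins for ReACC's hybrid (lexical plus semantic) retriever and its generator.

```python
# Toy sketch of retrieval-augmented completion: fetch the most similar snippet
# and prepend it as extra context before asking the model to complete the code.
def retrieve_similar(unfinished: str, corpus: list[str]) -> str:
    query = set(unfinished.split())

    def jaccard(snippet: str) -> float:
        tokens = set(snippet.split())
        return len(query & tokens) / max(len(query | tokens), 1)

    return max(corpus, key=jaccard)


def retrieval_augmented_complete(unfinished: str, corpus: list[str], complete_fn) -> str:
    retrieved = retrieve_similar(unfinished, corpus)
    prompt = f"# Retrieved similar code:\n{retrieved}\n# Code to complete:\n{unfinished}"
    return complete_fn(prompt)
```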
arXiv: 2022-03-15