Code Generation Tools (Almost) for Free? A Study of Few-Shot,
Pre-Trained Language Models on Code
- URL: http://arxiv.org/abs/2206.01335v1
- Date: Thu, 2 Jun 2022 23:15:42 GMT
- Title: Code Generation Tools (Almost) for Free? A Study of Few-Shot,
Pre-Trained Language Models on Code
- Authors: Patrick Bareiß, Beatriz Souza, Marcelo d'Amorim, Michael Pradel
- Abstract summary: Few-shot learning with large-scale, pre-trained language models is a powerful way to answer questions about code.
This paper studies to what extent a state-of-the-art, pre-trained language model of code, Codex, may serve this purpose.
- Score: 13.15617135394116
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning with large-scale, pre-trained language models is a powerful
way to answer questions about code, e.g., how to complete a given code example,
or even generate code snippets from scratch. The success of these models raises
the question of whether they could serve as a basis for building a wide range of
code generation tools. Traditionally, such tools are built manually and
separately for each task. Instead, few-shot learning may make it possible to
obtain different tools from a single pre-trained language model simply by
providing a few examples or a natural language description of the expected tool
behavior. This paper
studies to what extent a state-of-the-art, pre-trained language model of code,
Codex, may serve this purpose. We consider three code manipulation and code
generation tasks targeted by a range of traditional tools: (i) code mutation;
(ii) test oracle generation from natural language documentation; and (iii) test
case generation. For each task, we compare few-shot learning to a manually
built tool. Our results show that the model-based tools complement (code
mutation), are on par with (test oracle generation), or even outperform their
respective traditionally built tools (test case generation), while requiring far
less development effort. By comparing the effectiveness of different
variants of the model-based tools, we provide insights on how to design an
appropriate input ("prompt") to the model and what influence the size of the
model has. For example, we find that providing a small natural language
description of the code generation task is an easy way to improve predictions.
Overall, we conclude that few-shot language models are surprisingly effective,
yet there is still more work to be done, such as exploring more diverse ways of
prompting and tackling even more involved tasks.
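To make the few-shot setup concrete, below is a minimal sketch of how one of the
studied tools (code mutation) could be obtained from a Codex-style completion
model. The prompt layout, the example mutations, and the helper name
build_mutation_prompt are illustrative assumptions rather than the paper's
actual prompts; note how a short natural language description of the task
precedes the examples, mirroring the finding above.

# Minimal sketch (Python): turning a few-shot prompt into a "code mutation" tool.
# The task description and examples are placeholders, not the paper's prompts.

FEW_SHOT_PROMPT = '''\
# Task: Given a Python function, produce a mutated version that slightly
# changes its behavior (e.g., flip a comparison or introduce an off-by-one).

# Original:
def is_adult(age):
    return age >= 18
# Mutant:
def is_adult(age):
    return age > 18

# Original:
def total(prices):
    return sum(prices)
# Mutant:
def total(prices):
    return sum(prices) + 1

# Original:
{code_under_test}
# Mutant:
'''


def build_mutation_prompt(code_under_test: str) -> str:
    """Fill the few-shot template with the function to mutate."""
    return FEW_SHOT_PROMPT.format(code_under_test=code_under_test.rstrip())


if __name__ == "__main__":
    prompt = build_mutation_prompt("def double(x):\n    return 2 * x")
    print(prompt)
    # The resulting prompt would be sent to a code completion model (e.g.,
    # Codex) and the generated continuation parsed as the mutated function.

Swapping the task description and the examples (e.g., pairs of documentation
strings and assertion-style oracles) would yield a different tool from the same
model, which is the reuse the abstract describes.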
Related papers
- Curriculum Learning for Small Code Language Models [0.09999629695552192]
This paper explores the potential of curriculum learning in enhancing the performance of code language models.
We demonstrate that a well-designed curriculum learning approach significantly improves the accuracy of small decoder-only code language models.
arXiv Detail & Related papers (2024-07-14T13:32:24Z)
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks based on their control flow and data flow to bridge the gap between programming languages and natural language.
Experiments and ablations on four datasets covering both C++ and Python validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
- Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
arXiv Detail & Related papers (2023-05-08T10:00:05Z)
- Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study [4.438873396405334]
We investigate whether making code easier to understand by adding contextual data improves the performance of pre-trained code language models on the task of code completion.
For comments, we find that the models perform better in the presence of multi-line comments.
arXiv Detail & Related papers (2023-04-24T17:09:14Z)
- Toolformer: Language Models Can Teach Themselves to Use Tools [62.04867424598204]
Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale.
We show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds.
We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction.
arXiv Detail & Related papers (2023-02-09T16:49:57Z)
- Multi-lingual Evaluation of Code Generation Models [82.7357812992118]
We present new benchmarks for evaluating code generation models: MBXP, Multilingual HumanEval, and MathQA-X.
These datasets cover over 10 programming languages.
We are able to assess the performance of code generation models in a multi-lingual fashion.
arXiv Detail & Related papers (2022-10-26T17:17:06Z)
- Multitask Prompted Training Enables Zero-Shot Task Generalization [70.12770442071657]
We develop a system for mapping general natural language tasks into a human-readable prompted form.
We fine-tune a pretrained encoder-decoder model on this multitask mixture covering a wide variety of tasks.
The model attains strong zero-shot performance on several standard datasets, often outperforming models 16x its size.
arXiv Detail & Related papers (2021-10-15T17:08:57Z)
- Can Machines Read Coding Manuals Yet? -- A Benchmark for Building Better Language Models for Code Understanding [3.98345038769576]
We derive a set of benchmarks that assess code understanding based on tasks such as predicting the best answer to a question in a forum post.
We evaluate the performance of current state-of-the-art language models on these tasks and show that fine-tuning yields a significant improvement on each of them.
arXiv Detail & Related papers (2021-09-15T17:42:44Z)
- The Turking Test: Can Language Models Understand Instructions? [45.266428794559495]
We present the Turking Test, which examines a model's ability to follow natural language instructions of varying complexity.
Despite our lenient evaluation methodology, we observe that a large pretrained language model performs poorly across all tasks.
arXiv Detail & Related papers (2020-10-22T18:44:16Z)
- Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning [70.81910984985683]
We propose an effective way to fine-tune a single, large pre-trained model on multiple downstream generation tasks simultaneously.
Experiments on five diverse language generation tasks show that, by adding only 2-3% extra parameters per task, our model can match or even improve on the performance of fine-tuning the whole model.
arXiv Detail & Related papers (2020-04-08T06:18:44Z)
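The "additional 2-3% parameters" idea can be illustrated with a generic
bottleneck-adapter sketch in PyTorch: the pre-trained weights stay frozen and
only a small residual module is trained per task. This is an illustration of
the general technique under that assumption, not necessarily the exact
mechanism of the paper above; the class and helper names are made up for the
example.

import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Small residual bottleneck; the only trainable parameters for a task."""

    def __init__(self, hidden_size: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's behavior as the default.
        return x + self.up(self.act(self.down(x)))


def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())


if __name__ == "__main__":
    hidden = 768
    # Stand-in for one frozen block of a large pre-trained model.
    frozen_block = nn.Sequential(
        nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, hidden)
    )
    for p in frozen_block.parameters():
        p.requires_grad = False

    adapter = Adapter(hidden)          # per-task trainable parameters
    x = torch.randn(4, hidden)
    y = adapter(frozen_block(x))       # frozen computation + task-specific tweak

    ratio = count_params(adapter) / count_params(frozen_block)
    print(f"adapter adds ~{ratio:.1%} extra parameters to this block")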