CodeFill: Multi-token Code Completion by Jointly Learning from Structure
and Naming Sequences
- URL: http://arxiv.org/abs/2202.06689v1
- Date: Mon, 14 Feb 2022 13:26:54 GMT
- Title: CodeFill: Multi-token Code Completion by Jointly Learning from Structure
and Naming Sequences
- Authors: Maliheh Izadi, Roberta Gismondi, Georgios Gousios
- Abstract summary: We present CodeFill, a language model for autocompletion that combines learned structure and naming information.
CodeFill is trained both for single-token and multi-token (statement) prediction.
To make the evaluation more realistic, we develop a method to automatically infer points in the source code at which completion matters.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Code completion is an essential feature of IDEs, yet current autocompleters
are restricted to either grammar-based or NLP-based single token completions.
Both approaches have significant drawbacks: grammar-based autocompletion is
restricted in dynamically-typed language environments, whereas NLP-based
autocompleters struggle to understand the semantics of the programming language
and the developer's code context.
In this work, we present CodeFill, a language model for autocompletion that
combines learned structure and naming information. Using a parallel Transformer
architecture and multi-task learning, CodeFill consumes sequences of source
code token names and their equivalent AST token types. Uniquely, CodeFill is
trained both for single-token and multi-token (statement) prediction, which
enables it to learn long-range dependencies among grammatical and naming
elements. We train CodeFill on two datasets, consisting of 29M and 425M lines
of code, respectively. To make the evaluation more realistic, we develop a
method to automatically infer points in the source code at which completion
matters. We compare CodeFill against four baselines and two state-of-the-art
models, GPT-C and TravTrans+. CodeFill surpasses all baselines in single-token
prediction (MRR: 70.9% vs. 66.2% and 67.8%) and outperforms the state of the
art for multi-token prediction (ROUGE-L: 63.7% vs. 52.4% and 59.2%, for n=4
tokens). We publicly release our source code and datasets.
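The joint structure/naming design can be illustrated with a small sketch: two embedding streams (token names and aligned AST token types) feed a shared causal Transformer with one prediction head per task. All class names, dimensions, and the simple multi-task loss note below are illustrative assumptions, not the released CodeFill implementation.

```python
import torch
import torch.nn as nn

class DualStreamCompleter(nn.Module):
    """Illustrative sketch: joint model over token names and AST token types."""

    def __init__(self, name_vocab, type_vocab, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.name_emb = nn.Embedding(name_vocab, d_model)   # naming stream
        self.type_emb = nn.Embedding(type_vocab, d_model)   # structure stream
        self.pos_emb = nn.Embedding(2048, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.body = nn.TransformerEncoder(layer, n_layers)  # shared layers
        self.name_head = nn.Linear(d_model, name_vocab)     # next-token-name task
        self.type_head = nn.Linear(d_model, type_vocab)     # next-token-type task

    def forward(self, name_ids, type_ids):
        t = name_ids.size(1)
        pos = torch.arange(t, device=name_ids.device)
        x = self.name_emb(name_ids) + self.type_emb(type_ids) + self.pos_emb(pos)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h = self.body(x, mask=causal)
        return self.name_head(h), self.type_head(h)

model = DualStreamCompleter(name_vocab=50_000, type_vocab=100)
names = torch.randint(0, 50_000, (2, 16))   # token-name ids
types = torch.randint(0, 100, (2, 16))      # aligned AST-type ids
name_logits, type_logits = model(names, types)
# Multi-task training: weighted sum of the two cross-entropy losses (weights assumed).
```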
Related papers
- Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass [72.07642648108849]
Superposed Decoding is a new decoding algorithm that generates $k$ drafts at the cost of one autoregressive inference pass.
Superposed Decoding can be combined with other decoding strategies, resulting in universal coverage gains when scaling inference time compute.
arXiv Detail & Related papers (2024-05-28T17:40:48Z)
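A heavily simplified reading of the superposition idea above: keep k drafts, feed one weighted mixture of their token embeddings through the model per step, and fan the shared next-token distribution back out to the drafts. The gpt2 stand-in model, uniform weights, and the naive draft-to-candidate assignment are assumptions for illustration, not the paper's exact procedure.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")          # stand-in model (assumption)
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
emb = model.get_input_embeddings()                   # token embedding matrix

prompt = tok("def fibonacci(n):\n    ", return_tensors="pt").input_ids[0].tolist()
k = 3
drafts = [list(prompt) for _ in range(k)]            # k drafts share the prompt
weights = torch.full((k,), 1.0 / k)                  # uniform superposition weights

with torch.no_grad():
    for _ in range(8):                               # extend each draft by 8 tokens
        ids = torch.tensor(drafts)                   # (k, T); drafts stay equal length
        e = emb(ids)                                 # (k, T, H) per-draft embeddings
        mixed = (weights[:, None, None] * e).sum(0, keepdim=True)  # (1, T, H)
        logits = model(inputs_embeds=mixed).logits[0, -1]          # one forward pass
        top = torch.topk(logits, k).indices
        # Naive assignment: draft i takes the i-th candidate. The paper instead
        # scores draft/candidate combinations and keeps the best k overall.
        drafts = [d + [top[i].item()] for i, d in enumerate(drafts)]

for d in drafts:
    print(tok.decode(d[len(prompt):]))
```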
- LongCoder: A Long-Range Pre-trained Language Model for Code Completion [56.813974784131624]
LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens.
Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction.
Memory tokens are included to highlight important statements that may be invoked later and need to be memorized.
arXiv Detail & Related papers (2023-06-26T17:59:24Z)
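The attention pattern summarized above (a local sliding window plus globally visible bridge/memory tokens) can be sketched as a boolean mask; the window size and global-token positions below are arbitrary examples, and a sparse attention kernel would be needed to actually realize the efficiency benefit.

```python
import torch

def longcoder_style_mask(seq_len, window, global_positions):
    """True = attention allowed: local sliding window plus global tokens."""
    idx = torch.arange(seq_len)
    mask = (idx[None, :] - idx[:, None]).abs() <= window   # sliding window
    for g in global_positions:
        mask[g, :] = True    # a global (bridge/memory) token sees everything
        mask[:, g] = True    # and every position sees the global token
    return mask

mask = longcoder_style_mask(seq_len=16, window=2, global_positions=[0, 8])
print(mask.int())
# Converted to additive -inf form, such a mask can be handed to a standard
# self-attention layer to restrict which pairs of positions interact.
```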
- Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation [61.50286000143233]
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z)
- Syntax-Aware On-the-Fly Code Completion [13.268277642411974]
We propose PyCoder to leverage token types, a kind of lightweight syntactic information.
Our PyCoder achieves the first rank on the CodeXGLUE leaderboard with an accuracy of 77.12% for the token-level predictions.
arXiv Detail & Related papers (2022-11-09T04:24:18Z)
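The token types PyCoder leverages are the kind of lightweight syntactic information Python's standard tokenize module already exposes; the sketch below only shows how to pair each code token with its type, not PyCoder's model.

```python
import io
import tokenize

source = "def add(a, b):\n    return a + b\n"

# Pair every lexical token with its type name, e.g. ('add', 'NAME'), ('+', 'OP').
# A type-aware completer can then be trained on both aligned sequences.
pairs = [
    (tok.string, tokenize.tok_name[tok.type])
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
    if tok.type != tokenize.ENDMARKER
]
for text, kind in pairs:
    print(f"{kind:10} {text!r}")
```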
- LAMNER: Code Comment Generation Using Character Language Model and Named Entity Recognition [0.7894331610810762]
We present LAnguage Model and Named Entity Recognition (LAMNER), a code comment generator capable of encoding code constructs effectively and capturing the structural property of a code token.
We evaluate the generated comments from LAMNER and other baselines on a popular Java dataset with four commonly used metrics.
arXiv Detail & Related papers (2022-04-05T20:53:06Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework that leverages both lexical copying and retrieval of code with similar semantics.
We evaluate our approach on the code completion task in Python and Java, achieving state-of-the-art performance on the CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
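The retrieve-then-complete recipe above can be illustrated with a toy retriever: score stored snippets against the unfinished code (token overlap here stands in for ReACC's retriever) and prepend the best hit to the prompt. The snippet database, similarity measure, and final generation step are hypothetical.

```python
def jaccard(a, b):
    """Toy lexical similarity between two code fragments."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

# Hypothetical retrieval corpus of previously seen code.
database = [
    "def read_json(path):\n    import json\n    with open(path) as f:\n        return json.load(f)",
    "def read_csv(path):\n    import csv\n    with open(path) as f:\n        return list(csv.reader(f))",
]

unfinished = "def load_config(path):\n    with open(path) as f:"

# 1) Retrieve the snippet most similar to the unfinished code.
best = max(database, key=lambda snippet: jaccard(snippet, unfinished))

# 2) Prepend it so the generator can copy lexically from it.
prompt = f"# retrieved context:\n{best}\n\n{unfinished}"
print(prompt)
# 3) Feed `prompt` to any code language model to produce the completion.
```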
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
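A compact sketch of the contrastive objective described above: an in-batch InfoNCE loss applied both to code-code positives (unimodal) and to text-code pairs (bimodal). The encoders are stubbed with random embeddings; only the loss computation is meant to be illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.05):
    """In-batch contrastive loss: the i-th anchor should match the i-th positive."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(a.size(0))         # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

B, H = 8, 128
code_emb = torch.randn(B, H)     # encoder(code)             - stub
doc_emb = torch.randn(B, H)      # encoder(documentation)    - stub
para_emb = torch.randn(B, H)     # encoder(positive code)    - stub

bimodal = info_nce(doc_emb, code_emb)     # text-code pairs
unimodal = info_nce(para_emb, code_emb)   # code-code pairs
loss = bimodal + unimodal                 # joint objective (equal weights assumed)
print(loss.item())
```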
- CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation [36.47905744758698]
We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers.
Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning.
arXiv Detail & Related papers (2021-09-02T12:21:06Z)
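CodeT5 checkpoints are publicly released; assuming the Salesforce/codet5-base checkpoint on the Hugging Face hub and T5's standard sentinel-token span infilling, a minimal completion call looks roughly like this (the example snippet and generation settings are illustrative).

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tok = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# T5-style span infilling: the sentinel <extra_id_0> marks the span to predict.
code = "def greet(user):\n    print('Hello, ' + <extra_id_0> + '!')"
input_ids = tok(code, return_tensors="pt").input_ids

out = model.generate(input_ids, max_length=10)
print(tok.decode(out[0], skip_special_tokens=True))
```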
- CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model [23.947178895479464]
We propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
arXiv Detail & Related papers (2021-08-10T10:08:21Z)
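Both pre-training objectives above require AST structure; Python's built-in ast module is enough to extract parent-child edges and node types for a toy example. This shows only the extraction such objectives would consume, not CLSEBERT's training loop.

```python
import ast

source = "def area(r):\n    return 3.14 * r * r\n"
tree = ast.parse(source)

edges, node_types = [], set()
for parent in ast.walk(tree):
    node_types.add(type(parent).__name__)
    for child in ast.iter_child_nodes(parent):
        edges.append((type(parent).__name__, type(child).__name__))

# Edge-prediction target: does an AST edge exist between two nodes?
# Type-prediction target: the node/token type labels collected above.
print(edges[:6])
print(sorted(node_types))
```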
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming languages that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
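GraphCodeBERT is also publicly released; below is a minimal code-search sketch assuming the microsoft/graphcodebert-base checkpoint and plain mean-pooled embeddings (the paper's fine-tuned search pipeline additionally feeds data-flow edges, which this sketch omits).

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("microsoft/graphcodebert-base")
model = AutoModel.from_pretrained("microsoft/graphcodebert-base").eval()

def embed(text):
    """Mean-pooled last-hidden-state embedding (simplified; no data flow)."""
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state      # (1, T, H)
    return hidden.mean(dim=1).squeeze(0)

query = "read a json file and return its contents"
candidates = [
    "def load(path):\n    import json\n    return json.load(open(path))",
    "def add(a, b):\n    return a + b",
]
q = embed(query)
scores = [torch.cosine_similarity(q, embed(c), dim=0).item() for c in candidates]
print(max(zip(scores, candidates)))  # the json-loading snippet should score higher
```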