GraphCodeBERT: Pre-training Code Representations with Data Flow
- URL: http://arxiv.org/abs/2009.08366v4
- Date: Mon, 13 Sep 2021 05:48:51 GMT
- Title: GraphCodeBERT: Pre-training Code Representations with Data Flow
- Authors: Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu,
Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao
Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang,
Ming Zhou
- Abstract summary: We present GraphCodeBERT, a pre-trained model for programming languages that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
- Score: 97.00641522327699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained models for programming language have achieved dramatic empirical
improvements on a variety of code-related tasks such as code search, code
completion, code summarization, etc. However, existing pre-trained models
regard a code snippet as a sequence of tokens, while ignoring the inherent
structure of code, which provides crucial code semantics and would enhance the
code understanding process. We present GraphCodeBERT, a pre-trained model for
programming languages that considers the inherent structure of code. Instead of
taking the syntactic-level structure of code, such as the abstract syntax tree (AST),
we use data flow in the pre-training stage, a semantic-level structure of
code that encodes the "where-the-value-comes-from" relation between
variables. Such a semantic-level structure is neat and avoids the unnecessarily
deep hierarchy of an AST, a property that makes the model more efficient. We
develop GraphCodeBERT based on the Transformer. In addition to using
the task of masked language modeling, we introduce two structure-aware
pre-training tasks. One is to predict code structure edges, and the other is to
align representations between source code and code structure. We implement the
model in an efficient way with a graph-guided masked attention function to
incorporate the code structure. We evaluate our model on four tasks, including
code search, clone detection, code translation, and code refinement. Results
show that the code structure and the newly introduced pre-training tasks improve
GraphCodeBERT, which achieves state-of-the-art performance on the four downstream
tasks. We further show that the model prefers structure-level attention over
token-level attention in the task of code search.
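To make the "where-the-value-comes-from" relation concrete, here is a minimal sketch that extracts such edges from a Python snippet with the standard ast module, assuming only straight-line assignments. GraphCodeBERT itself builds its data-flow graph from tree-sitter parse trees across several languages, so the function name and simplifications below are illustrative assumptions, not the paper's implementation.

```python
import ast

def data_flow_edges(source: str):
    """Collect (position, variable) nodes and comes-from edges for straight-line code."""
    tree = ast.parse(source)
    nodes = []      # (index, variable name), one entry per variable occurrence
    edges = []      # (src, dst): the value at position dst comes from position src
    last_def = {}   # variable name -> index of the occurrence that last defined it

    def add_node(name):
        nodes.append((len(nodes), name))
        return len(nodes) - 1

    for stmt in tree.body:
        if not isinstance(stmt, ast.Assign):
            continue  # the sketch ignores control flow, calls, attributes, etc.
        # Variables read on the right-hand side get an edge from their last definition.
        read_ids = []
        for name in ast.walk(stmt.value):
            if isinstance(name, ast.Name):
                idx = add_node(name.id)
                read_ids.append(idx)
                if name.id in last_def:
                    edges.append((last_def[name.id], idx))
        # The assigned variable's value comes from the variables used to compute it.
        for target in stmt.targets:
            if isinstance(target, ast.Name):
                t_idx = add_node(target.id)
                edges.extend((r, t_idx) for r in read_ids)
                last_def[target.id] = t_idx
    return nodes, edges

nodes, edges = data_flow_edges("x = a + b\ny = x * a\n")
print(nodes)  # [(0, 'a'), (1, 'b'), (2, 'x'), (3, 'x'), (4, 'a'), (5, 'y')]
print(edges)  # [(0, 2), (1, 2), (2, 3), (3, 5), (4, 5)]
```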
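The data-flow nodes and edges then drive the graph-guided masked attention. The following is a minimal sketch of building the additive attention mask, assuming a hypothetical input layout of [CLS] + code tokens + [SEP] + variable nodes and hypothetical names (graph_guided_mask, var_positions, align); the actual model applies an equivalent mask inside every Transformer attention layer.

```python
import torch

def graph_guided_mask(seq_len, var_positions, edges, align, special_positions=frozenset()):
    """Additive attention mask: 0.0 where attention is allowed, -inf where it is blocked.

    var_positions     : positions in the input that hold data-flow (variable) nodes
    edges             : set of (src, dst) comes-from edges between variable positions
    align             : variable position -> set of code-token positions it was extracted from
    special_positions : positions of tokens such as [CLS]/[SEP] that attend freely
    """
    mask = torch.zeros(seq_len, seq_len)
    for i in range(seq_len):
        for j in range(seq_len):
            if i in special_positions or j in special_positions:
                continue  # special tokens are never masked out
            i_var, j_var = i in var_positions, j in var_positions
            if not i_var and not j_var:
                continue  # code-token / code-token attention is unrestricted
            if i_var and j_var:
                allowed = i == j or (i, j) in edges or (j, i) in edges
            else:  # one side is a variable node, the other a code token
                v, c = (i, j) if i_var else (j, i)
                allowed = c in align.get(v, set())
            if not allowed:
                mask[i, j] = float("-inf")
    return mask

# Toy layout for "x = a + b": positions 0..6 are [CLS] x = a + b [SEP],
# and positions 7..9 are the data-flow nodes for x, a, b.
mask = graph_guided_mask(
    seq_len=10,
    var_positions={7, 8, 9},
    edges={(8, 7), (9, 7)},          # x's value comes from a and b
    align={7: {1}, 8: {3}, 9: {5}},  # each node aligns with the token it was parsed from
    special_positions={0, 6},
)
# The mask is added to the scaled attention scores (QK^T / sqrt(d)) before the softmax.
```

Under these assumptions, a variable node can attend only to itself, to nodes it exchanges value with, and to the code token it was identified from, while ordinary code tokens and special tokens attend freely.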
Related papers
- Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
arXiv Detail & Related papers (2023-05-08T10:00:05Z)
- Unveiling Code Pre-Trained Models: Investigating Syntax and Semantics Capacities [34.27541293716398]
We extensively analyze seven code models to investigate how code models represent code syntax and semantics.
We have developed four probing tasks to evaluate the models' abilities to learn code syntax and semantics.
Our results emphasize the strengths and weaknesses of various code models in mastering code syntax and semantics.
arXiv Detail & Related papers (2022-12-20T06:15:17Z)
- Soft-Labeled Contrastive Pre-training for Function-level Code Representation [127.71430696347174]
We present SCodeR, a soft-labeled contrastive pre-training framework with two positive sample construction methods.
By considering the relevance between code snippets in a large-scale code corpus, the soft-labeled contrastive pre-training can obtain fine-grained soft labels.
SCodeR achieves new state-of-the-art performance on four code-related tasks over seven datasets.
arXiv Detail & Related papers (2022-10-18T05:17:37Z)
- UniXcoder: Unified Cross-Modal Pre-training for Code Representation [65.6846553962117]
We present UniXcoder, a unified cross-modal pre-trained model for programming language.
We propose a one-to-one mapping method to transform an AST into a sequence structure that retains all structural information from the tree.
We evaluate UniXcoder on five code-related tasks over nine datasets.
arXiv Detail & Related papers (2022-03-08T04:48:07Z)
- What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code [32.345301158791045]
Pre-trained language models for source code have been proposed to model the context of code.
These models leverage masked pre-training and Transformer.
It is not clear why these models work and what feature correlations they can capture.
arXiv Detail & Related papers (2022-02-14T16:22:10Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines the unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- Contrastive Learning for Source Code with Structural and Functional Properties [66.10710134948478]
We present BOOST, a novel self-supervised model to focus pre-training based on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model in a way that brings the functionally equivalent code closer and distinct code further through a contrastive learning objective.
arXiv Detail & Related papers (2021-10-08T02:56:43Z)
- What do pre-trained code models know about code? [9.60966128833701]
We use diagnostic tasks called probes to investigate pre-trained code models.
BERT (pre-trained on English), CodeBERT and CodeBERTa (pre-trained on source code and natural language documentation), and GraphCodeBERT (pre-trained on source code with data flow) are investigated.
arXiv Detail & Related papers (2021-08-25T16:20:17Z)
- CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model [23.947178895479464]
We propose CLSEBERT, a Contrastive Learning framework for a Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
arXiv Detail & Related papers (2021-08-10T10:08:21Z)