CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model
- URL: http://arxiv.org/abs/2108.04556v1
- Date: Tue, 10 Aug 2021 10:08:21 GMT
- Title: CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model
- Authors: Xin Wang, Yasheng Wang, Pingyi Zhou, Meng Xiao, Yadao Wang, Li Li,
Xiao Liu, Hao Wu, Jin Liu, Xin Jiang
- Abstract summary: We propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
- Score: 23.947178895479464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained models for programming languages have proven their significant
values in various code-related tasks, such as code search, code clone
detection, and code translation. Currently, most pre-trained models treat a
code snippet as a sequence of tokens or only focus on the data flow between
code identifiers. However, rich code syntax and hierarchy, which can provide
important structural information and semantic rules to help enhance code
representations, are ignored. In addition, although BERT-based code pre-trained
models achieve high performance on many downstream tasks, the natively derived
sequence representations of BERT have been shown to be of low quality and to
perform poorly on code matching and similarity tasks. To address these
problems, we propose CLSEBERT, a Contrastive Learning Framework
for Syntax Enhanced Code Pre-Trained Model, to deal with various code
intelligence tasks. In the pre-training stage, we consider the code syntax and
hierarchy contained in the Abstract Syntax Tree (AST) and leverage the
contrastive learning to learn noise-invariant code representations. Besides
the masked language modeling (MLM), we also introduce two novel pre-training
objectives. One is to predict the edges between nodes in the abstract syntax
tree, and the other is to predict the types of code tokens. Through extensive
experiments on four code intelligence tasks, we successfully show the
effectiveness of our proposed model.
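As a rough illustration of how these objectives could fit together, the sketch below combines an InfoNCE-style contrastive loss over two noisy views of the same code snippet with AST edge prediction and node type prediction heads. This is a hypothetical sketch, not the authors' released code: the head definitions, tensor shapes, equal loss weighting, and the use of the [CLS] position for the contrastive term are all assumptions, and the MLM term is only indicated as a comment.

```python
# Hypothetical sketch of CLSEBERT-style multi-task pre-training losses (PyTorch).
# Shapes, heads, and loss weights are assumptions for illustration only.
import torch
import torch.nn.functional as F


def contrastive_loss(anchor, positive, temperature=0.07):
    """InfoNCE-style loss: two noisy views of the same snippet are positives,
    other snippets in the batch are negatives."""
    anchor = F.normalize(anchor, dim=-1)            # (B, H)
    positive = F.normalize(positive, dim=-1)        # (B, H)
    logits = anchor @ positive.t() / temperature    # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0))          # matching view lies on the diagonal
    return F.cross_entropy(logits, targets)


def ast_edge_loss(states, edge_labels, edge_head):
    """Predict whether an AST edge connects two tokens from the concatenation
    of their contextual representations."""
    B, L, H = states.shape
    left = states.unsqueeze(2).expand(B, L, L, H)
    right = states.unsqueeze(1).expand(B, L, L, H)
    logits = edge_head(torch.cat([left, right], dim=-1)).squeeze(-1)  # (B, L, L)
    return F.binary_cross_entropy_with_logits(logits, edge_labels.float())


def node_type_loss(states, type_labels, type_head):
    """Predict the AST node type (identifier, keyword, literal, ...) of each token."""
    logits = type_head(states)                      # (B, L, num_types)
    return F.cross_entropy(logits.flatten(0, 1), type_labels.flatten())


# Toy example: combine the objectives into a single pre-training loss.
B, L, H, num_types = 2, 16, 32, 8
view1 = torch.randn(B, L, H, requires_grad=True)    # encoder output for noisy view 1
view2 = torch.randn(B, L, H, requires_grad=True)    # encoder output for noisy view 2
edge_head = torch.nn.Linear(2 * H, 1)
type_head = torch.nn.Linear(H, num_types)

loss = (
    contrastive_loss(view1[:, 0], view2[:, 0])       # use the [CLS] position as the summary
    + ast_edge_loss(view1, torch.randint(0, 2, (B, L, L)), edge_head)
    + node_type_loss(view1, torch.randint(0, num_types, (B, L)), type_head)
    # + mlm_loss(...)                                # standard masked language modeling term
)
loss.backward()
```

In a real setup the two views would come from the model's noise/augmentation strategy and the edge and type labels from parsing the code into an AST; random tensors stand in here so the snippet runs standalone.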
Related papers
- Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
arXiv Detail & Related papers (2023-05-08T10:00:05Z)
- Unveiling Code Pre-Trained Models: Investigating Syntax and Semantics Capacities [34.27541293716398]
We extensively analyze seven code models to investigate how code models represent code syntax and semantics.
We have developed four probing tasks to evaluate the models' abilities to learn code syntax and semantics.
Our results emphasize the strengths and weaknesses of various code models in mastering code syntax and semantics.
arXiv Detail & Related papers (2022-12-20T06:15:17Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- UniXcoder: Unified Cross-Modal Pre-training for Code Representation [65.6846553962117]
We present UniXcoder, a unified cross-modal pre-trained model for programming language.
We propose a one-to-one mapping method to transform the AST into a sequence structure that retains all structural information from the tree.
We evaluate UniXcoder on five code-related tasks over nine datasets.
arXiv Detail & Related papers (2022-03-08T04:48:07Z)
- What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code [32.345301158791045]
Pre-trained language models for source code have been proposed to model the context of code.
These models leverage masked pre-training and the Transformer architecture.
It is not clear why these models work and what feature correlations they can capture.
arXiv Detail & Related papers (2022-02-14T16:22:10Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- Contrastive Learning for Source Code with Structural and Functional Properties [66.10710134948478]
We present BOOST, a novel self-supervised model that focuses pre-training on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model in a way that brings the functionally equivalent code closer and distinct code further through a contrastive learning objective.
arXiv Detail & Related papers (2021-10-08T02:56:43Z)
- CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation [36.47905744758698]
We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers.
Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning.
arXiv Detail & Related papers (2021-09-02T12:21:06Z)
- InferCode: Self-Supervised Learning of Code Representations by Predicting Subtrees [17.461451218469062]
This paper proposes InferCode, which overcomes the limitations of prior techniques by adapting a self-supervised learning mechanism to build source code models.
InferCode treats subtrees in ASTs as labels for training code representations, without any human labeling effort or the overhead of expensive graph construction.
Compared to previous code learning techniques applied to the same downstream tasks, such as Code2Vec, Code2Seq, and ASTNN, the pre-trained InferCode model achieves higher performance.
arXiv Detail & Related papers (2020-12-13T10:33:41Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
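To make the "where-the-value-comes-from" relation mentioned in the GraphCodeBERT entry above concrete, the toy sketch below extracts such data-flow edges for straight-line Python assignments using the standard ast module. It only illustrates the idea, not GraphCodeBERT's actual data-flow extraction; the code snippet and variable names are made up.

```python
# Toy illustration of "where-the-value-comes-from" data-flow edges
# for straight-line assignments, using Python's standard ast module.
import ast

code = "x = 1\ny = x + 2\nz = y + x"
tree = ast.parse(code)

last_def = {}   # variable name -> line of its most recent assignment
edges = []      # (use, definition) pairs: where the used value comes from

for stmt in tree.body:                      # straight-line assignments only
    for node in ast.walk(stmt.value):       # variable uses on the right-hand side
        if isinstance(node, ast.Name) and node.id in last_def:
            edges.append((f"{node.id}@L{node.lineno}", f"{node.id}@L{last_def[node.id]}"))
    target = stmt.targets[0]                # left-hand side defines a new value
    last_def[target.id] = stmt.lineno

print(edges)    # [('x@L2', 'x@L1'), ('y@L3', 'y@L2'), ('x@L3', 'x@L1')]
```

Each edge links a variable use to the assignment its value currently comes from; GraphCodeBERT builds a comparable graph over code tokens and uses it during pre-training.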
This list is automatically generated from the titles and abstracts of the papers on this site.