Contrastive Learning for Source Code with Structural and Functional
Properties
- URL: http://arxiv.org/abs/2110.03868v1
- Date: Fri, 8 Oct 2021 02:56:43 GMT
- Title: Contrastive Learning for Source Code with Structural and Functional
Properties
- Authors: Yangruibo Ding, Luca Buratti, Saurabh Pujar, Alessandro Morari,
Baishakhi Ray, Saikat Chakraborty
- Abstract summary: We present BOOST, a novel self-supervised model to focus pre-training based on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model with a contrastive learning objective that pulls functionally equivalent code closer together and pushes functionally distinct code apart.
- Score: 66.10710134948478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained transformer models have recently shown promise in
understanding source code. Most existing works attempt to understand code from
its textual features and limited structural knowledge. However, a program's
functionality sometimes cannot be fully revealed by the code sequence, even
with structure information. Programs can contain very different tokens and
structures while sharing the same functionality, but changing only one or a few
code tokens can introduce unexpected or malicious program behaviors while
preserving the syntax and most tokens. In this work, we present BOOST, a novel
self-supervised model to focus pre-training based on the characteristics of
source code. We first employ automated, structure-guided code transformation
algorithms that generate (i.) functionally equivalent code that looks
drastically different from the original one, and (ii.) textually and
syntactically very similar code that is functionally distinct from the
original. We train our model with a contrastive learning objective that pulls
functionally equivalent code closer together and pushes functionally distinct
code apart.
To encode the structure information, we introduce a new node-type masked
language model objective that helps the model learn about structural context.
We pre-train BOOST with a much smaller dataset than the state-of-the-art
models, but our small models can still match or outperform these large models
in code understanding and generation tasks.
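The contrastive objective described above can be sketched generically. The following is a minimal, illustrative InfoNCE-style loss (our own simplification, not BOOST's actual implementation; embeddings, the similarity function, and the temperature value are all assumptions for illustration), where a functionally equivalent transform serves as the positive and a syntactically similar but functionally distinct variant serves as a hard negative:

```python
# Illustrative sketch, not the paper's implementation: an InfoNCE-style
# contrastive objective over code embeddings. The anchor program and its
# functionally equivalent transform should have high similarity; the
# functionally distinct variant acts as a hard negative.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE: -log( exp(sim(a,p)/t) / (exp(sim(a,p)/t) + sum_n exp(sim(a,n)/t)) )."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# Toy embeddings: the positive is close to the anchor, the negative is not.
anchor   = [1.0, 0.0]
positive = [0.9, 0.1]
negative = [-1.0, 0.2]

low  = contrastive_loss(anchor, positive, [negative])   # small loss
high = contrastive_loss(anchor, negative, [positive])   # large loss
assert low < high  # equivalent code is pulled closer than distinct code
```

Minimizing this loss drives the model to embed functionally equivalent programs near each other regardless of their surface form, which is the intuition behind the paper's structure-guided positive and negative transformations.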
Related papers
- Toward Exploring the Code Understanding Capabilities of Pre-trained Code Generation Models [12.959392500354223]
We pioneer the transfer of knowledge from pre-trained code generation models to code understanding tasks.
We introduce CL4D, a contrastive learning method designed to enhance the representation capabilities of decoder-only models.
arXiv Detail & Related papers (2024-06-18T06:52:14Z)
- SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code Summarization [51.67317895094664]
This paper studies file-level code summarization, which can assist programmers in understanding and maintaining large source code projects.
We propose SparseCoder, an identifier-aware sparse transformer for effectively handling long code sequences.
arXiv Detail & Related papers (2024-01-26T09:23:27Z)
- Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation [61.50286000143233]
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z)
- Unveiling Code Pre-Trained Models: Investigating Syntax and Semantics Capacities [34.27541293716398]
We extensively analyze seven code models to investigate how code models represent code syntax and semantics.
We have developed four probing tasks to evaluate the models' abilities to learn code syntax and semantics.
Our results emphasize the strengths and weaknesses of various code models in mastering code syntax and semantics.
arXiv Detail & Related papers (2022-12-20T06:15:17Z)
- UniXcoder: Unified Cross-Modal Pre-training for Code Representation [65.6846553962117]
We present UniXcoder, a unified cross-modal pre-trained model for programming language.
We propose a one-to-one mapping method to transform an AST into a sequence structure that retains all structural information from the tree.
We evaluate UniXcoder on five code-related tasks over nine datasets.
arXiv Detail & Related papers (2022-03-08T04:48:07Z)
- What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code [32.345301158791045]
Pre-trained language models for source code have been proposed to model the context of code.
These models leverage masked pre-training and the Transformer architecture.
It is not clear why these models work and what feature correlations they can capture.
arXiv Detail & Related papers (2022-02-14T16:22:10Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines the unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model [23.947178895479464]
We propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
arXiv Detail & Related papers (2021-08-10T10:08:21Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
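The "where-the-value-comes-from" relation used by GraphCodeBERT can be approximated for Python with the standard-library ast module. The function below is a minimal, hedged sketch of such an extraction (our own illustration, not the paper's actual pipeline; it only handles simple assignments and ignores control flow):

```python
# Illustrative sketch, not GraphCodeBERT's actual pipeline: extract a
# "where-the-value-comes-from" relation from Python source. Each
# assignment target is linked to the variables read on its right-hand
# side, yielding (consumer, producer) edges.
import ast

def value_comes_from(source):
    """Return (target, source_var) edges for simple assignments."""
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            # Variables read on the right-hand side of the assignment.
            reads = [n.id for n in ast.walk(node.value)
                     if isinstance(n, ast.Name)]
            for target in node.targets:
                if isinstance(target, ast.Name):
                    edges.extend((target.id, src) for src in reads)
    return edges

code = "a = 1\nb = 2\nc = a + b\n"
print(sorted(value_comes_from(code)))  # [('c', 'a'), ('c', 'b')]
```

A full implementation would also track reassignments, control flow, and uses inside expressions, but even this simplified relation illustrates the semantic-level structure the model consumes alongside the token sequence.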
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.