Soft-Labeled Contrastive Pre-training for Function-level Code
Representation
- URL: http://arxiv.org/abs/2210.09597v1
- Date: Tue, 18 Oct 2022 05:17:37 GMT
- Title: Soft-Labeled Contrastive Pre-training for Function-level Code
Representation
- Authors: Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu,
Daxin Jiang, Weizhu Chen and Nan Duan
- Abstract summary: We present SCodeR, a Soft-labeled contrastive pre-training framework with two positive sample construction methods.
Considering the relevance between code fragments in a large-scale code corpus, the soft-labeled contrastive pre-training can obtain fine-grained soft labels.
SCodeR achieves new state-of-the-art performance on four code-related tasks over seven datasets.
- Score: 127.71430696347174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Code contrastive pre-training has recently achieved significant progress on
code-related tasks. In this paper, we present SCodeR, a Soft-labeled contrastive
pre-training framework with two positive sample construction methods to learn
function-level Code Representation. Considering the relevance between code
fragments in a large-scale code corpus, the soft-labeled contrastive pre-training
obtains fine-grained soft labels through an iterative adversarial procedure and
uses them to learn better code representations. Positive sample construction is
another key ingredient of contrastive pre-training. Previous works use
transformation-based methods such as variable renaming to generate semantically
equivalent positive code. However, the generated code usually has a highly
similar surface form, which misleads the model into focusing on superficial
code structure instead of code semantics. To encourage SCodeR to capture
semantic information from the code, we use code comments and abstract syntax
sub-trees of the code to build positive samples. We conduct experiments on four
code-related tasks over seven datasets. Extensive experimental results show
that SCodeR achieves new state-of-the-art performance on all of them, which
illustrates the effectiveness of the proposed pre-training method.
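To make the objective concrete, the following is a minimal sketch of what a soft-labeled contrastive loss of this kind can look like: the model's similarity distribution over candidate functions is trained against a fine-grained soft-label distribution instead of the one-hot target of standard InfoNCE. The encoder, the temperature value, and the way the soft labels are produced are left abstract here (in SCodeR they come from the iterative adversarial procedure); the function below simply takes them as inputs and is an illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_labeled_contrastive_loss(anchor_emb, candidate_emb, soft_labels, temperature=0.05):
    """Cross-entropy between the model's similarity distribution over candidates
    and a fine-grained soft-label distribution, instead of the one-hot target of
    standard InfoNCE.

    anchor_emb:    [batch, dim]   embeddings of anchor functions
    candidate_emb: [batch, dim]   embeddings of candidate functions (positives and in-batch others)
    soft_labels:   [batch, batch] row-normalized relevance scores; in SCodeR these
                   would come from the iterative adversarial procedure, here they
                   are simply an input tensor.
    """
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    candidate_emb = F.normalize(candidate_emb, dim=-1)
    logits = anchor_emb @ candidate_emb.t() / temperature  # [batch, batch] similarities
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_labels * log_probs).sum(dim=-1).mean()
```

Setting soft_labels to the identity matrix (torch.eye(batch_size)) recovers the usual hard-label contrastive objective, which makes the role of the fine-grained labels easy to see.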
Related papers
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476] (arXiv, 2022-04-07)
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
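The summary above does not spell out what "soft data augmentation" means; one common scheme of this kind is to dynamically mask or replace code tokens with some probability, so that every training step sees a slightly different view of the same snippet. The sketch below illustrates only that generic idea; the function name, probabilities, and vocabulary handling are assumptions rather than the cited paper's exact method.

```python
import random

def soft_augment(tokens, vocab, mask_token="<mask>", p=0.15, seed=None):
    """Dynamically mask or replace each token with probability p, producing a
    slightly different view of the same code every time it is called."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p / 2:
            out.append(mask_token)           # mask the token
        elif r < p:
            out.append(rng.choice(vocab))    # replace with a random vocabulary token
        else:
            out.append(tok)                  # keep the token unchanged
    return out

code_tokens = "def add ( a , b ) : return a + b".split()
view_1 = soft_augment(code_tokens, vocab=["x", "y", "0"], seed=1)
view_2 = soft_augment(code_tokens, vocab=["x", "y", "0"], seed=2)
# view_1 and view_2 are two augmented views of the same snippet, usable as a
# positive pair for contrastive learning.
```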
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763] (arXiv, 2022-03-15)
We propose a retrieval-augmented code completion framework that leverages both lexical copying and reference to semantically similar code obtained by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
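As a rough illustration of the retrieval-augmented setup, the sketch below retrieves lexically similar snippets from a corpus and prepends them to the unfinished code before completion. The helper names and the difflib-based retriever are placeholders: ReACC's actual system combines lexical and dense semantic retrieval and feeds the extended context to a pre-trained code completion model.

```python
import difflib

def retrieve_similar(unfinished_code: str, corpus: list[str], k: int = 1) -> list[str]:
    """Toy lexical retriever: rank corpus snippets by surface similarity to the
    unfinished code (only the lexical side of retrieval is sketched here)."""
    ranked = sorted(
        corpus,
        key=lambda snippet: difflib.SequenceMatcher(None, unfinished_code, snippet).ratio(),
        reverse=True,
    )
    return ranked[:k]

def build_completion_prompt(unfinished_code: str, corpus: list[str]) -> str:
    """Prepend retrieved code to the unfinished code so a code completion model
    can copy from it or refer to it as extended context."""
    retrieved = retrieve_similar(unfinished_code, corpus)
    return "\n\n".join(retrieved + [unfinished_code])
```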
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165] (arXiv, 2022-01-26)
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
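A minimal sketch of how such positive pairs might be assembled from a function-level corpus is given below. The record fields (code, doc, func_name, inline_comments) and the pluggable text_sim function are illustrative assumptions, not CodeRetriever's actual data pipeline.

```python
def bimodal_pairs(functions):
    """Text-code positives: pair each function with its documentation and with
    its in-line comments."""
    pairs = []
    for fn in functions:
        if fn.get("doc"):
            pairs.append((fn["doc"], fn["code"]))
        for comment in fn.get("inline_comments", []):
            pairs.append((comment, fn["code"]))
    return pairs

def unimodal_pairs(functions, text_sim, threshold=0.8):
    """Code-code positives: pair functions whose documentation and names are
    judged semantically close by a pluggable similarity function `text_sim`."""
    pairs = []
    for i, a in enumerate(functions):
        for b in functions[i + 1:]:
            key_a = f'{a.get("doc", "")} {a.get("func_name", "")}'
            key_b = f'{b.get("doc", "")} {b.get("func_name", "")}'
            if text_sim(key_a, key_b) >= threshold:
                pairs.append((a["code"], b["code"]))
    return pairs
```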
- CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model [23.947178895479464] (arXiv, 2021-08-10)
We propose CLSEBERT, a Contrastive Learning framework for a Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
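As a concrete, simplified picture of the edge-prediction objective, the sketch below extracts parent-child edges from a Python AST with the standard ast module; CLSEBERT itself works over multiple languages, and how it samples node pairs is not shown here.

```python
import ast

def ast_edge_labels(source: str):
    """Extract parent-child edges from a Python AST using the standard `ast`
    module. Nodes are numbered in walk order; an edge-prediction objective asks
    the model to decide, for a sampled pair of nodes, whether such an edge exists."""
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))
    index = {id(n): i for i, n in enumerate(nodes)}
    edges = []
    for parent in nodes:
        for child in ast.iter_child_nodes(parent):
            edges.append((index[id(parent)], index[id(child)]))
    return nodes, edges

nodes, edges = ast_edge_labels("def add(a, b):\n    return a + b\n")
# `edges` contains pairs such as (Module -> FunctionDef) and
# (FunctionDef -> Return), expressed as node indices.
```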
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699] (arXiv, 2020-09-17)
We present GraphCodeBERT, a pre-trained model for programming languages that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
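A toy version of the "where-the-value-comes-from" relation can be computed for straight-line Python code by linking each variable use to the most recent assignment of that name, as sketched below. GraphCodeBERT builds its data flow graphs from language-agnostic parse trees and handles far more cases than this simplification.

```python
import ast

def data_flow_edges(source: str):
    """Link each variable use (Load) to the most recent assignment (Store) of the
    same name, in source order. This ignores branches, loops, and scoping, so it
    is only a toy approximation of a real data-flow graph."""
    tree = ast.parse(source)
    names = sorted(
        (n for n in ast.walk(tree) if isinstance(n, ast.Name)),
        key=lambda n: (n.lineno, n.col_offset),
    )
    last_store = {}   # variable name -> line of its most recent assignment
    edges = []        # (use_line, def_line, variable name)
    for node in names:
        if isinstance(node.ctx, ast.Store):
            last_store[node.id] = node.lineno
        elif isinstance(node.ctx, ast.Load) and node.id in last_store:
            edges.append((node.lineno, last_store[node.id], node.id))
    return edges

print(data_flow_edges("x = 1\ny = x + 2\nz = y * x\n"))
# [(2, 1, 'x'), (3, 2, 'y'), (3, 1, 'x')]
```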
- Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations [28.61567319928316] (arXiv, 2020-09-06)
Corder is a self-supervised contrastive learning framework for source code models.
The key innovation is that we train the source code model by asking it to recognize similar and dissimilar code snippets.
We have shown that the code models pretrained by Corder substantially outperform the other baselines for code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks.
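One example of a semantic-preserving transformation in this spirit is consistent variable renaming: the renamed snippet is a "similar" (positive) view of the original, while an unrelated snippet serves as a "dissimilar" (negative) example. The sketch below shows only this single transformation for Python (ast.unparse requires Python 3.9+); Corder uses a broader set of program transformations.

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Consistently rename identifiers (variables and function parameters).
    The transformed code computes the same function as the original, so the
    pair (original, renamed) can serve as a positive example."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

def rename(source: str, mapping: dict) -> str:
    return ast.unparse(RenameVariables(mapping).visit(ast.parse(source)))

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
positive = rename(original, {"xs": "values", "s": "acc", "x": "item"})
# `positive` is semantically identical to `original` but differs in surface form.
```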
- Contrastive Code Representation Learning [95.86686147053958] (arXiv, 2020-07-09)
We show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics.
We propose ContraCode: a contrastive pre-training task that learns code functionality, not form.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.