CODE-MVP: Learning to Represent Source Code from Multiple Views with
Contrastive Pre-Training
- URL: http://arxiv.org/abs/2205.02029v1
- Date: Wed, 4 May 2022 12:40:58 GMT
- Title: CODE-MVP: Learning to Represent Source Code from Multiple Views with
Contrastive Pre-Training
- Authors: Xin Wang, Yasheng Wang, Yao Wan, Jiawei Wang, Pingyi Zhou, Li Li, Hao
Wu and Jin Liu
- Abstract summary: We propose to integrate different views with the natural-language description of source code into a unified framework with Multi-View contrastive Pre-training.
Specifically, we first extract multiple code views using compiler tools, and learn the complementary information among them under a contrastive learning framework.
Experiments on three downstream tasks over five datasets demonstrate the superiority of CODE-MVP when compared with several state-of-the-art baselines.
- Score: 26.695345034376388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed increasing interest in code representation
learning, which aims to encode the semantics of source code into distributed
vectors. Various works have been proposed to represent the complex semantics of
source code from different views, including plain text, the Abstract Syntax
Tree (AST), and several kinds of code graphs (e.g., the Control/Data Flow
Graph). However, most of them only consider a single view of source code
independently, ignoring the correspondences among different views. In this
paper, we propose to integrate different views with the natural-language
description of source code into a unified framework with Multi-View contrastive
Pre-training, and we name our model CODE-MVP. Specifically, we first extract
multiple code views using compiler tools and learn the complementary
information among them under a contrastive learning framework. Inspired by
type checking in compilation, we also design a fine-grained type inference
objective for pre-training. Experiments on three downstream tasks over five
datasets demonstrate the superiority of CODE-MVP compared with several
state-of-the-art baselines. For example, we achieve gains of 2.4/2.3/1.1 points
in MRR/MAP/Accuracy on the natural language code retrieval, code similarity,
and code defect detection tasks, respectively.
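To make the multi-view idea concrete, the following minimal sketch (not the authors' implementation; CODE-MVP extracts its views with compiler tools and encodes them with a pre-trained transformer) builds a token view and an AST view of two toy Python functions using only the standard library and applies an InfoNCE-style contrastive loss so that views of the same snippet agree. The names token_view, ast_view and BagEncoder are hypothetical placeholders.

```python
# Minimal sketch of multi-view contrastive pre-training (assumptions noted in
# the text above): two views of each snippet, a toy bag-of-symbols encoder,
# and an InfoNCE loss that treats views of the same snippet as positives and
# other snippets in the batch as negatives.
import ast
import io
import tokenize

import torch
import torch.nn.functional as F

SNIPPETS = [
    "def add(a, b):\n    return a + b\n",
    "def scale(xs, k):\n    return [k * x for x in xs]\n",
]

def token_view(code):
    """Plain-text view: the non-empty lexical tokens of the source."""
    toks = tokenize.generate_tokens(io.StringIO(code).readline)
    return [t.string for t in toks if t.string.strip()]

def ast_view(code):
    """Syntactic view: node-type names from a walk over the AST."""
    return [type(n).__name__ for n in ast.walk(ast.parse(code))]

# Shared vocabulary over both views of both snippets.
vocab = {s: i for i, s in enumerate(sorted(
    {s for c in SNIPPETS for s in token_view(c) + ast_view(c)}))}

class BagEncoder(torch.nn.Module):
    """Toy stand-in for the shared encoder (CODE-MVP uses a transformer)."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.vocab = vocab
        self.emb = torch.nn.EmbeddingBag(len(vocab), dim)

    def forward(self, symbols):
        ids = torch.tensor([[self.vocab[s] for s in symbols]])
        return F.normalize(self.emb(ids), dim=-1)

encoder = BagEncoder(vocab)
tok_reps = torch.cat([encoder(token_view(c)) for c in SNIPPETS])  # (2, dim)
ast_reps = torch.cat([encoder(ast_view(c)) for c in SNIPPETS])    # (2, dim)

# InfoNCE across views: the token view of snippet i should retrieve the AST
# view of snippet i; other snippets in the batch act as negatives.
logits = tok_reps @ ast_reps.t() / 0.07
loss = F.cross_entropy(logits, torch.arange(len(SNIPPETS)))
print(loss.item())
```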
Related papers
- Abstract Syntax Tree for Programming Language Understanding and
Representation: How Far Are We? [23.52632194060246]
Programming language understanding and representation (a.k.a. code representation learning) has always been a hot and challenging task in software engineering.
The abstract syntax tree (AST), a fundamental code feature, illustrates the syntactic information of the source code and has been widely used in code representation learning.
We compare the performance of models trained with token-sequence-based (Token for short) code representations and AST-based code representations on three popular types of code-related tasks.
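To make the comparison concrete, the sketch below produces the two input forms with the Python standard library only; the study itself covers several languages and parsers, and the serialize helper is purely illustrative.

```python
# Illustrative only: the two input forms compared above, produced with the
# Python standard library for a single toy function.
import ast
import io
import tokenize

code = "def square(x):\n    return x * x\n"

# Token-based representation: a flat sequence of lexical tokens.
tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(code).readline)
          if t.string.strip()]
print(tokens)  # ['def', 'square', '(', 'x', ')', ':', 'return', 'x', '*', 'x']

# AST-based representation: a bracketed pre-order serialization that keeps
# the syntactic structure the token sequence throws away.
def serialize(node):
    children = [serialize(c) for c in ast.iter_child_nodes(node)]
    name = type(node).__name__
    return f"({name} {' '.join(children)})" if children else name

print(serialize(ast.parse(code)))
```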
arXiv Detail & Related papers (2023-12-01T08:37:27Z)
- Soft-Labeled Contrastive Pre-training for Function-level Code Representation [127.71430696347174]
We present SCodeR, a Soft-labeled contrastive pre-training framework with two positive sample construction methods.
Considering the relevance between codes in a large-scale code corpus, the soft-labeled contrastive pre-training can obtain fine-grained soft-labels.
SCodeR achieves new state-of-the-art performance on four code-related tasks over seven datasets.
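As a rough sketch of the soft-labeled idea (not SCodeR's actual procedure), the one-hot targets of standard InfoNCE can be replaced by a soft distribution over in-batch samples supplied by some relevance signal; the random teacher_sim tensor below is only a stand-in for that signal.

```python
# Hedged sketch of soft-labeled contrastive pre-training: the student's
# similarity distribution over the batch is matched to soft labels instead of
# a single hard positive index. All tensors here are random placeholders.
import torch
import torch.nn.functional as F

batch, dim, tau = 8, 128, 0.07
code_reps = F.normalize(torch.randn(batch, dim), dim=-1)  # code representations
text_reps = F.normalize(torch.randn(batch, dim), dim=-1)  # paired text representations
teacher_sim = torch.randn(batch, batch)                   # stand-in relevance scores

# Soft labels: a distribution over the batch rather than a one-hot target.
soft_labels = F.softmax(teacher_sim / tau, dim=-1)

student_logits = code_reps @ text_reps.t() / tau
log_probs = F.log_softmax(student_logits, dim=-1)
loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
print(loss.item())
```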
arXiv Detail & Related papers (2022-10-18T05:17:37Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
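A hedged sketch of the two kinds of positive pairs follows; the real pipeline mines pairs from a large-scale corpus with a learned relevance model, whereas the toy doc_overlap heuristic below is just a placeholder for that signal.

```python
# Sketch (not CodeRetriever's code) of unimodal and bimodal positive pairs.
from itertools import combinations

corpus = [
    {"name": "read_json", "doc": "Load a JSON file into a dict.",
     "code": "def read_json(p):\n    import json\n    return json.load(open(p))"},
    {"name": "load_json", "doc": "Read a JSON file and return a dict.",
     "code": "def load_json(path):\n    import json\n    with open(path) as f:\n        return json.load(f)"},
    {"name": "mean", "doc": "Average of a list of numbers.",
     "code": "def mean(xs):\n    return sum(xs) / len(xs)"},
]

def doc_overlap(a, b):
    """Toy stand-in for a learned relevance signal: docstring word overlap."""
    wa, wb = set(a["doc"].lower().split()), set(b["doc"].lower().split())
    return len(wa & wb) / len(wa | wb)

# Unimodal (code, code) positives: functions whose documentation/name agree.
unimodal_pairs = [(a["code"], b["code"])
                  for a, b in combinations(corpus, 2) if doc_overlap(a, b) > 0.3]

# Bimodal (text, code) positives: each function paired with its documentation.
bimodal_pairs = [(f["doc"], f["code"]) for f in corpus]

print(len(unimodal_pairs), "unimodal pairs,", len(bimodal_pairs), "bimodal pairs")
```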
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model [23.947178895479464]
We propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
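The two objectives can be illustrated by deriving their supervision targets from a small snippet with the Python standard library; this is an assumed sketch based on the description above, not CLSEBERT's code.

```python
# Illustrative supervision targets only: edges between AST nodes and a type
# label for each code token.
import ast
import io
import token
import tokenize

code = "def double(x):\n    return 2 * x\n"

# Objective 1: predict whether an edge exists between two AST nodes.
tree = ast.parse(code)
edges = [(type(parent).__name__, type(child).__name__)
         for parent in ast.walk(tree)
         for child in ast.iter_child_nodes(parent)]
print(edges[:3])  # [('Module', 'FunctionDef'), ('FunctionDef', 'arguments'), ...]

# Objective 2: predict the type of each code token.
token_types = [(t.string, token.tok_name[t.exact_type])
               for t in tokenize.generate_tokens(io.StringIO(code).readline)
               if t.string.strip()]
print(token_types)  # e.g. [('def', 'NAME'), ('double', 'NAME'), ('(', 'LPAR'), ...]
```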
arXiv Detail & Related papers (2021-08-10T10:08:21Z)
- Universal Representation for Code [8.978516631649276]
We present effective pre-training strategies on top of a novel graph-based code representation.
We pre-train graph neural networks on the representation to extract universal code properties.
We evaluate our model on two real-world datasets -- spanning over 30M Java methods and 770K Python methods.
arXiv Detail & Related papers (2021-03-04T15:39:25Z)
- Deep Graph Matching and Searching for Semantic Code Retrieval [76.51445515611469]
We propose an end-to-end deep graph matching and searching model based on graph neural networks.
We first represent both natural language query texts and programming language code snippets with unified graph-structured data.
In particular, DGMS not only captures more structural information for individual query texts or code snippets but also learns the fine-grained similarity between them.
arXiv Detail & Related papers (2020-10-24T14:16:50Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
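A crude sketch of such a "where-the-value-comes-from" relation for a Python fragment is shown below; GraphCodeBERT itself extracts data flow with parser tooling across multiple languages, so the simple last-definition bookkeeping here is only illustrative.

```python
# Crude data-flow sketch: link each variable use to its most recent definition.
import ast

code = "a = 1\nb = a + 2\nc = a + b\n"

last_def = {}    # variable name -> line number of its most recent assignment
flow_edges = []  # (use_line, def_line, variable): where each value comes from

# Walk names in source order; record definitions and link later uses to them.
names = sorted((n for n in ast.walk(ast.parse(code)) if isinstance(n, ast.Name)),
               key=lambda n: (n.lineno, n.col_offset))
for node in names:
    if isinstance(node.ctx, ast.Store):
        last_def[node.id] = node.lineno
    elif isinstance(node.ctx, ast.Load) and node.id in last_def:
        flow_edges.append((node.lineno, last_def[node.id], node.id))

print(flow_edges)  # [(2, 1, 'a'), (3, 1, 'a'), (3, 2, 'b')]
```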
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
- Contrastive Code Representation Learning [95.86686147053958]
We show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics.
We propose ContraCode: a contrastive pre-training task that learns code functionality, not form.
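As an illustration of learning functionality rather than form (a sketch under assumptions; ContraCode applies compiler-based transformations to JavaScript), the snippet below builds a semantics-preserving variant of a Python function by renaming its variables, which could serve as the positive of a contrastive pair while unrelated programs act as negatives.

```python
# Sketch of a semantics-preserving transformation used to build positive pairs.
import ast

class RenameVars(ast.NodeTransformer):
    """Rename every local variable/argument to an anonymous placeholder."""
    def __init__(self):
        self.mapping = {}

    def _new_name(self, old):
        return self.mapping.setdefault(old, f"v{len(self.mapping)}")

    def visit_arg(self, node):
        node.arg = self._new_name(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._new_name(node.id)
        return node

code = "def area(width, height):\n    size = width * height\n    return size\n"
tree = RenameVars().visit(ast.parse(code))
positive_variant = ast.unparse(ast.fix_missing_locations(tree))
print(positive_variant)
# def area(v0, v1):
#     v2 = v0 * v1
#     return v2
```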
arXiv Detail & Related papers (2020-07-09T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.