Leveraging Code Generation to Improve Code Retrieval and Summarization
via Dual Learning
- URL: http://arxiv.org/abs/2002.10198v2
- Date: Tue, 25 Feb 2020 08:49:11 GMT
- Title: Leveraging Code Generation to Improve Code Retrieval and Summarization
via Dual Learning
- Authors: Wei Ye, Rui Xie, Jinglei Zhang, Tianxiang Hu, Xiaoyin Wang, Shikun
Zhang
- Abstract summary: Code summarization generates a brief natural language description given a source code snippet, while code retrieval fetches relevant source code given a natural language query.
Recent studies have combined these two tasks to improve their performance.
We propose a novel end-to-end model for the two tasks by introducing an additional code generation task.
- Score: 18.354352985591305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Code summarization generates a brief natural language description given a
source code snippet, while code retrieval fetches relevant source code given a
natural language query. Since both tasks aim to model the association between
natural language and programming language, recent studies have combined these
two tasks to improve their performance. However, researchers have not yet been
able to effectively leverage the intrinsic connection between the two tasks, as
they train them separately or in a pipeline manner, which means their
performance cannot be well balanced. In this paper, we propose a novel
end-to-end model for the two tasks by introducing an additional code generation
task. More specifically, we explicitly exploit the probabilistic correlation
between code summarization and code generation with dual learning, and utilize
the two encoders for code summarization and code generation to train the code
retrieval task via multi-task learning. We have carried out extensive
experiments on an existing dataset of SQL and Python, and results show that our
model can significantly improve the results of the code retrieval task over
state-of-the-art models, as well as achieve competitive performance in terms of
BLEU score for the code summarization task.
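To make the dual-learning idea concrete, the minimal Python sketch below penalizes violations of the probabilistic identity log P(c) + log P(t|c) = log P(t) + log P(c|t) that connects summarization (code-to-text) and generation (text-to-code). It assumes per-example log-probabilities from two language models (the marginals) and the two seq2seq models (the conditionals) are already available; the paper's exact regularizer and weighting may differ.

```python
import torch

def dual_regularizer(logp_code, logp_text, logp_text_given_code, logp_code_given_text):
    # Duality constraint: log P(c) + log P(t|c) = log P(t) + log P(c|t).
    # Penalize the squared gap so the two directions stay consistent.
    gap = (logp_code + logp_text_given_code) - (logp_text + logp_code_given_text)
    return (gap ** 2).mean()

# Toy per-example log-probabilities (stand-ins for real model outputs).
logp_c   = torch.tensor([-42.0, -37.5])   # LM over code snippets
logp_t   = torch.tensor([-18.3, -21.0])   # LM over summaries
logp_t_c = torch.tensor([-12.1, -15.4])   # summarization model P(t|c)
logp_c_t = torch.tensor([-35.0, -33.2])   # generation model P(c|t)
print(dual_regularizer(logp_c, logp_t, logp_t_c, logp_c_t))
```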
Related papers
- CodeXEmbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval [103.116634967815]
We introduce CodeXEmbed, a family of large-scale code embedding models ranging from 400M to 7B parameters.
Our novel training pipeline unifies multiple programming languages and transforms various code-related tasks into a common retrieval framework.
Our 7B model sets a new state-of-the-art (SOTA) in code retrieval, outperforming the previous leading model, Voyage-Code, by over 20% on the CoIR benchmark.
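At inference time, retrieval with such embedding models reduces to nearest-neighbor search in a shared query/code vector space. A minimal sketch, where the random vectors stand in for real CodeXEmbed embeddings:

```python
import numpy as np

def retrieve(query_vec, code_vecs, code_ids, k=3):
    # Rank code snippets by cosine similarity to the query embedding.
    q = query_vec / np.linalg.norm(query_vec)
    c = code_vecs / np.linalg.norm(code_vecs, axis=1, keepdims=True)
    scores = c @ q
    top = np.argsort(-scores)[:k]
    return [(code_ids[i], float(scores[i])) for i in top]

# Toy usage: pretend-embeddings of five snippets; the query vector is a
# noisy copy of snippet 2's embedding, so snippet 2 should rank first.
rng = np.random.default_rng(0)
code_vecs = rng.normal(size=(5, 8))
query_vec = code_vecs[2] + 0.1 * rng.normal(size=8)
print(retrieve(query_vec, code_vecs, [f"snippet_{i}" for i in range(5)]))
```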
arXiv Detail & Related papers (2024-11-19T16:54:45Z)
- Generation-Augmented Query Expansion For Code Retrieval [51.20943646688115]
We propose a generation-augmented query expansion framework, inspired by the human retrieval process of sketching an answer before searching.
We achieve new state-of-the-art results on the CodeSearchNet benchmark.
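The "sketch an answer before searching" loop is easy to express in code. The outline below is hypothetical, with stub functions standing in for the generator and the retriever; it is not the paper's implementation.

```python
def expand_query(query, generator, retriever, k=10):
    # Draft a hypothetical answer first, then search with query + draft.
    draft = generator(query)
    return retriever(query + "\n" + draft, k=k)

# Stub components purely for illustration.
gen = lambda q: "def reverse_string(s):\n    return s[::-1]"
ret = lambda q, k: [f"hit {i} for query of {len(q)} chars" for i in range(k)]
print(expand_query("how to reverse a string in python", gen, ret, k=2))
```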
arXiv Detail & Related papers (2022-12-20T23:49:37Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
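Such (query, code) contrastive training is commonly realized with an in-batch InfoNCE loss, where matched pairs are positives and every other code in the batch is a negative. A minimal PyTorch sketch; the temperature value is an assumption, not the paper's setting:

```python
import torch
import torch.nn.functional as F

def info_nce(query_emb, code_emb, temperature=0.05):
    # Matched (query, code) pairs sit on the diagonal of the similarity
    # matrix; all other codes in the batch serve as negatives.
    q = F.normalize(query_emb, dim=1)
    c = F.normalize(code_emb, dim=1)
    logits = q @ c.t() / temperature
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

print(info_nce(torch.randn(4, 16), torch.randn(4, 16)))
```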
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
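The core loop is "retrieve similar code, then complete with it in context". A simplified sketch with stubs in place of ReACC's hybrid retriever and code language model:

```python
def retrieval_augmented_complete(prefix, search, generate, k=2):
    # Fetch snippets similar to the unfinished code and prepend them to
    # the generator's context before completing.
    similar = search(prefix, k=k)
    context = "\n".join(similar) + "\n" + prefix
    return generate(context)

# Stub retriever and generator purely for illustration.
search = lambda q, k: ["def add(a, b):\n    return a + b"][:k]
generate = lambda ctx: ctx + "\n    return a - b"
print(retrieval_augmented_complete("def sub(a, b):", search, generate))
```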
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines the unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
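One way to mine bimodal text-code pairs of this kind is to pull (docstring, function) pairs straight out of source files. A simplified sketch using Python's ast module (ast.unparse requires Python 3.9+); CodeRetriever's actual pair construction is more elaborate:

```python
import ast

def bimodal_pairs(source):
    # Pair each function's docstring (text) with its code, yielding
    # (text, code) training examples for bimodal contrastive learning.
    pairs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc:
                pairs.append((doc, ast.unparse(node)))
    return pairs

src = '''
def area(r):
    """Compute the area of a circle with radius r."""
    return 3.14159 * r * r
'''
print(bimodal_pairs(src))
```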
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
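A toy version of these "where-the-value-comes-from" edges can be read off assignments with Python's ast module; GraphCodeBERT's real data-flow extraction covers far more constructs:

```python
import ast

def data_flow_edges(source):
    # For each simple assignment, link every variable read on the right
    # to the variable written on the left ("where the value comes from").
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            target = node.targets[0].id
            reads = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
            edges.extend((read, target) for read in reads)
    return edges

print(data_flow_edges("a = 1\nb = a + 2\nc = a + b"))
# [('a', 'b'), ('a', 'c'), ('b', 'c')]
```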
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
- Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations [28.61567319928316]
Corder is a self-supervised contrastive learning framework for source code models.
The key innovation is that we train the source code model by asking it to recognize similar and dissimilar code snippets.
We have shown that the code models pretrained by Corder substantially outperform the other baselines for code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks.
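A typical semantic-preserving transformation is consistent variable renaming, which produces a "similar" view of a snippet for contrastive training. A minimal sketch (the renaming map is arbitrary; Corder uses a broader set of transformations, and ast.unparse requires Python 3.9+):

```python
import ast

class Renamer(ast.NodeTransformer):
    # Consistently rename identifiers: behavior is preserved while the
    # surface form changes, yielding a positive pair for contrastive learning.
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

src = "def f(x):\n    y = x + 1\n    return y"
tree = Renamer({"x": "a", "y": "b"}).visit(ast.parse(src))
print(ast.unparse(tree))  # same semantics, new identifier names
```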
arXiv Detail & Related papers (2020-09-06T13:31:16Z)