Generation-Augmented Query Expansion For Code Retrieval
- URL: http://arxiv.org/abs/2212.10692v1
- Date: Tue, 20 Dec 2022 23:49:37 GMT
- Title: Generation-Augmented Query Expansion For Code Retrieval
- Authors: Dong Li and Yelong Shen and Ruoming Jin and Yi Mao and Kuan Wang and
Weizhu Chen
- Abstract summary: We propose a generation-augmented query expansion framework,
inspired by the human retrieval process of sketching an answer before searching.
We achieve new state-of-the-art results on the CodeSearchNet benchmark.
- Score: 51.20943646688115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained language models have achieved promising success in code
retrieval tasks, where a natural language documentation query is given to find
the most relevant existing code snippet. However, existing models focus only on
optimizing documentation-code pairs by embedding them into a latent space,
without incorporating external knowledge. In this paper, we propose a
generation-augmented query expansion framework. Inspired by the human retrieval
process of sketching an answer before searching, we utilize a powerful code
generation model to benefit the code retrieval task. Specifically, we
demonstrate that rather than merely retrieving the target code snippet
according to the documentation query, it is helpful to augment the
documentation query with its generation counterpart: code snippets produced by
the code generation model. To the best of our knowledge, this is the first
attempt to leverage a code generation model to enhance the code retrieval task.
We achieve new state-of-the-art results on the CodeSearchNet benchmark and
significantly surpass the baselines.
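The proposed pipeline is simple enough to sketch. Below is a minimal,
illustrative implementation, assuming an off-the-shelf code generation model
and a generic bi-encoder retriever; the checkpoint names are stand-ins for
illustration, not the models used in the paper. A candidate snippet is
generated from the documentation query, appended to it, and the expanded query
is used for dense retrieval.

```python
# Sketch of generation-augmented query expansion for code retrieval.
# Assumptions: GEN_MODEL and RET_MODEL are illustrative stand-ins, not
# the checkpoints used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sentence_transformers import SentenceTransformer, util

GEN_MODEL = "Salesforce/codegen-350M-mono"  # assumed code generator
RET_MODEL = "all-MiniLM-L6-v2"              # assumed bi-encoder retriever

gen_tok = AutoTokenizer.from_pretrained(GEN_MODEL)
gen_lm = AutoModelForCausalLM.from_pretrained(GEN_MODEL)
retriever = SentenceTransformer(RET_MODEL)

def expand_query(query: str, max_new_tokens: int = 64) -> str:
    """Sketch an answer first: generate a candidate code snippet from
    the natural-language query, then append it to the query."""
    inputs = gen_tok(query, return_tensors="pt")
    with torch.no_grad():
        out = gen_lm.generate(**inputs, max_new_tokens=max_new_tokens,
                              do_sample=False,
                              pad_token_id=gen_tok.eos_token_id)
    # Keep only the newly generated tokens.
    generated = gen_tok.decode(out[0][inputs["input_ids"].shape[1]:],
                               skip_special_tokens=True)
    return query + "\n" + generated

def retrieve(query: str, corpus: list[str], k: int = 5):
    """Dense retrieval over a code corpus with the expanded query."""
    expanded = expand_query(query)
    q_emb = retriever.encode(expanded, convert_to_tensor=True)
    c_emb = retriever.encode(corpus, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, c_emb, top_k=k)[0]
    return [(corpus[h["corpus_id"]], h["score"]) for h in hits]
```

The generated snippet acts as a rough sketch of the answer: even when it is not
fully correct code, it can move the expanded query closer to the target snippet
in the retriever's embedding space.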
Related papers
- Building A Coding Assistant via the Retrieval-Augmented Language Model [24.654428111628242]
We propose a retrieval-augmented language model (CONAN) to build a code assistant by mimicking the knowledge-seeking behaviors of humans during coding.
It consists of a code-structure-aware retriever (CONAN-R) and a dual-view code-representation-based retrieval-augmented generation model (CONAN-G).
arXiv Detail & Related papers (2024-10-21T17:34:39Z)
- CodeRAG-Bench: Can Retrieval Augment Code Generation? [78.37076502395699]
We conduct a systematic, large-scale analysis of code generation using retrieval-augmented generation.
We first curate a comprehensive evaluation benchmark, CodeRAG-Bench, encompassing three categories of code generation tasks.
We examine top-performing models on CodeRAG-Bench by providing contexts retrieved from one or multiple sources.
arXiv Detail & Related papers (2024-06-20T16:59:52Z)
- CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstrings for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
arXiv Detail & Related papers (2022-11-25T18:05:44Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework that leverages both lexical copying and reference to semantically similar code obtained by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs (a minimal sketch of this bimodal objective appears after the list).
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- Retrieval Augmented Code Generation and Summarization [43.823483197436076]
We propose a retrieval-augmented framework, tool, that retrieves relevant code or summaries from a retrieval database.
tool extends the state-of-the-art dense retrieval technique to search for relevant code or summaries.
We conduct experiments and extensive analysis on two benchmark datasets of code generation and summarization in Java and Python.
arXiv Detail & Related papers (2021-08-26T06:48:13Z)
- BERT2Code: Can Pretrained Language Models be Leveraged for Code Search? [0.7953229555481884]
We show that our model learns the inherent relationship between the embedding spaces, and we further probe the scope for improvement.
In this analysis, we show that the quality of the code embedding model is the bottleneck for our model's performance.
arXiv Detail & Related papers (2021-04-16T10:28:27Z)
- Neural Code Search Revisited: Enhancing Code Snippet Retrieval through Natural Language Intent [1.1168121941015012]
We study how code retrieval systems can be improved by leveraging descriptions to better capture the intents of code snippets.
Building on recent progress in transfer learning and natural language processing, we create a domain-specific retrieval model for code annotated with a natural language description.
arXiv Detail & Related papers (2020-08-27T15:39:09Z)
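Several of the entries above, CodeRetriever in particular, train their
retrievers with text-code contrastive learning. For reference, here is a
minimal sketch of a symmetric in-batch InfoNCE objective over paired text and
code embeddings; the batch size, dimensions, and temperature are illustrative
placeholders, not any specific paper's setup.

```python
# Minimal sketch of a bimodal text-code contrastive (InfoNCE) objective.
# The embeddings below are random placeholders; in practice they come
# from text and code encoders trained jointly.
import torch
import torch.nn.functional as F

def info_nce(text_emb: torch.Tensor, code_emb: torch.Tensor,
             temperature: float = 0.05) -> torch.Tensor:
    """In-batch negatives: the i-th text matches the i-th code snippet;
    every other snippet in the batch serves as a negative."""
    text_emb = F.normalize(text_emb, dim=-1)
    code_emb = F.normalize(code_emb, dim=-1)
    logits = text_emb @ code_emb.T / temperature  # (B, B) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    # Symmetric loss: text -> code and code -> text directions.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

# Usage with stand-in embeddings (batch of 8, dimension 256):
loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
```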