COSEA: Convolutional Code Search with Layer-wise Attention
- URL: http://arxiv.org/abs/2010.09520v1
- Date: Mon, 19 Oct 2020 13:53:38 GMT
- Title: COSEA: Convolutional Code Search with Layer-wise Attention
- Authors: Hao Wang, Jia Zhang, Yingce Xia, Jiang Bian, Chao Zhang, Tie-Yan Liu
- Abstract summary: We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
- Score: 90.35777733464354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic code search, which aims to retrieve code snippets relevant to a
given natural language query, has attracted many research efforts with the
purpose of accelerating software development. The huge amount of online
publicly available code repositories has prompted the employment of deep
learning techniques to build state-of-the-art code search models. Particularly,
they leverage deep neural networks to embed code and queries into a unified
semantic vector space and then use the similarity between the code and query
vectors to approximate their semantic correlation.
However, most existing studies overlook the code's intrinsic structural logic,
which contains a wealth of semantic information, and thus fail to capture the
intrinsic features of code. In this paper, we propose a new deep learning
architecture, COSEA, which leverages convolutional neural networks with
layer-wise attention to capture the code's valuable intrinsic structural logic.
To further increase the learning efficiency of COSEA, we propose a variant of
contrastive loss for training the code search model, where the ground-truth
code should be distinguished from the most similar negative sample. We have
implemented a prototype of COSEA. Extensive experiments over existing public
datasets of Python and SQL have demonstrated that COSEA can achieve significant
improvements over state-of-the-art methods on code search tasks.
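As a rough sketch of the pipeline the abstract describes, the PyTorch snippet below scores a batch of paired query/code embeddings by cosine similarity in the shared semantic space and applies a margin loss against each query's most similar negative. The function name, margin value, and in-batch negative sampling are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hard_negative_contrastive_loss(query_vecs, code_vecs, margin=0.5):
    """Margin loss against each query's *hardest* negative.

    query_vecs, code_vecs: (batch, dim) embeddings of paired queries and
    code snippets in a shared semantic space. Row i of each tensor is a
    ground-truth (query, code) pair; the other rows serve as negatives.
    NOTE: the margin of 0.5 and in-batch negatives are assumptions for
    illustration; the paper's loss may be parameterized differently.
    """
    # Pairwise cosine similarity between every query and every code snippet.
    sim = F.cosine_similarity(query_vecs.unsqueeze(1),
                              code_vecs.unsqueeze(0), dim=-1)  # (batch, batch)
    pos = sim.diagonal()  # similarity of each query to its ground-truth code
    # Mask out the positives, then take each query's most similar negative.
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    hardest_neg = sim.masked_fill(eye, float("-inf")).max(dim=1).values
    # Hinge: the ground-truth code must outscore the hardest negative by `margin`.
    return F.relu(margin - pos + hardest_neg).mean()

# At retrieval time, the same similarity ranks candidate snippets for a query:
# scores = F.cosine_similarity(query_vec.unsqueeze(0), all_code_vecs, dim=-1)
# best = scores.argmax()
```

Penalizing only the hardest negative, rather than averaging over all negatives, concentrates the training signal on the sample the model is most likely to confuse with the ground-truth code, which matches the behavior the abstract describes.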
Related papers
- Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective [85.48043537327258]
We propose MANGO (comMents As Natural loGic pivOts), including a comment contrastive training strategy and a corresponding logical comment decoding strategy.
Results indicate that MANGO significantly improves the code pass rate over strong baselines.
The robustness of the logical comment decoding strategy is notably higher than that of Chain-of-Thought prompting.
arXiv Detail & Related papers (2024-04-11T08:30:46Z)
- Survey of Code Search Based on Deep Learning [11.94599964179766]
This survey focuses on code search, that is, retrieving code that matches a given query.
Deep learning, which can extract complex semantic information, has achieved great success in this field.
We propose a new taxonomy to illustrate the state-of-the-art deep learning-based code search.
arXiv Detail & Related papers (2023-05-10T08:07:04Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
- Learning Deep Semantic Model for Code Search using CodeSearchNet Corpus [17.6095840480926]
We propose a novel deep semantic model which makes use of multi-modal sources.
We apply the proposed model to tackle the CodeSearchNet semantic code search challenge.
Our model is trained on the CodeSearchNet corpus and evaluated on held-out data; the final model achieves 0.384 NDCG and won first place in this benchmark.
arXiv Detail & Related papers (2022-01-27T04:15:59Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines the unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- BERT2Code: Can Pretrained Language Models be Leveraged for Code Search? [0.7953229555481884]
We show that our model learns the inherent relationship between the embedding spaces, and we further probe the scope for improvement.
In this analysis, we show that the quality of the code embedding model is the bottleneck for our model's performance.
arXiv Detail & Related papers (2021-04-16T10:28:27Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
- CoNCRA: A Convolutional Neural Network Code Retrieval Approach [0.0]
We propose a technique for semantic code search: a convolutional neural network approach to code retrieval.
Our technique aims to find the code snippet that most closely matches the developer's intent, expressed in natural language.
We evaluated our approach's efficacy on a dataset composed of questions and code snippets collected from Stack Overflow.
arXiv Detail & Related papers (2020-09-03T23:38:52Z)