On the Importance of Building High-quality Training Datasets for Neural
Code Search
- URL: http://arxiv.org/abs/2202.06649v1
- Date: Mon, 14 Feb 2022 12:02:41 GMT
- Title: On the Importance of Building High-quality Training Datasets for Neural
Code Search
- Authors: Zhensu Sun, Li Li, Yan Liu, Xiaoning Du, Li Li
- Abstract summary: We propose a data cleaning framework consisting of two subsequent filters: a rule-based syntactic filter and a model-based semantic filter.
We evaluate the effectiveness of our framework on two widely-used code search models and three manually-annotated code retrieval benchmarks.
- Score: 15.557818317497397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The performance of neural code search is significantly influenced by the
quality of the training data from which the neural models are derived. A large
corpus of high-quality query-code pairs is required to establish a precise
mapping from natural language to programming language. Due to limited
availability, most widely-used code search datasets are built with compromises,
such as using code comments as a replacement for queries. Our empirical study
on a well-known code search dataset reveals that over one-third of its queries
contain noise that makes them deviate from natural user queries. Models trained
on noisy data suffer severe performance degradation when applied in real-world
scenarios. Improving the dataset quality and making the queries of its samples
semantically identical to real user queries are therefore critical to the
practical usability of neural code search. In this paper, we
propose a data cleaning framework consisting of two subsequent filters: a
rule-based syntactic filter and a model-based semantic filter. This is the
first framework that applies semantic query cleaning to code search datasets.
Experimentally, we evaluate the effectiveness of our framework on two
widely-used code search models and three manually-annotated code retrieval
benchmarks. Training the popular DeepCS model with the filtered dataset from
our framework improves its performance by 19.2% MRR and 21.3% Answer@1 on
average across the three validation benchmarks.
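The abstract does not spell out how the two filters are implemented. The Python sketch below is a minimal illustration, under assumed heuristics, of how a rule-based syntactic filter and a model-based semantic filter could be chained over (query, code) pairs; the rule set, the `scorer` model, and the threshold are hypothetical placeholders rather than the paper's actual design.

```python
import re
from typing import Callable, Iterable, List, Tuple

Pair = Tuple[str, str]  # (query, code)

# Rule-based syntactic filter: each rule returns True if the query still looks
# like a natural user query. These rules are illustrative assumptions only.
SYNTACTIC_RULES: List[Callable[[str], bool]] = [
    lambda q: len(q.split()) >= 3,                               # drop one/two-word comments
    lambda q: not q.strip().startswith(("TODO", "FIXME", "@")),  # drop annotations/tags
    lambda q: "://" not in q,                                    # drop comments that are just URLs
    lambda q: not re.search(r"[{};=<>]", q),                     # drop code-like fragments
]

def passes_syntactic_filter(query: str) -> bool:
    return all(rule(query) for rule in SYNTACTIC_RULES)

def passes_semantic_filter(query: str,
                           scorer: Callable[[str], float],
                           threshold: float = 0.5) -> bool:
    """Keep the query only if a model judges it query-like.

    `scorer` stands in for any classifier returning the probability that a
    string reads like a real user query; it is not part of the paper.
    """
    return scorer(query) >= threshold

def clean_dataset(pairs: Iterable[Pair],
                  scorer: Callable[[str], float]) -> List[Pair]:
    """Apply the syntactic filter, then the semantic filter, to each pair."""
    return [(q, c) for q, c in pairs
            if passes_syntactic_filter(q) and passes_semantic_filter(q, scorer)]
```

Any classifier that scores how query-like a string is can play the role of `scorer`; the concrete rules, model, and threshold used in the paper may differ.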
Related papers
- CodeXEmbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval [103.116634967815]
We introduce CodeXEmbed, a family of large-scale code embedding models ranging from 400M to 7B parameters.
Our novel training pipeline unifies multiple programming languages and transforms various code-related tasks into a common retrieval framework.
Our 7B model sets a new state-of-the-art (SOTA) in code retrieval, outperforming the previous leading model, Voyage-Code, by over 20% on the CoIR benchmark.
arXiv Detail & Related papers (2024-11-19T16:54:45Z)
- Enhancing Legal Case Retrieval via Scaling High-quality Synthetic Query-Candidate Pairs [67.54302101989542]
Legal case retrieval aims to provide similar cases as references for a given fact description.
Existing works mainly focus on case-to-case retrieval using lengthy queries.
The data scale is insufficient to satisfy the training requirements of existing data-hungry neural models.
arXiv Detail & Related papers (2024-10-09T06:26:39Z)
- How Do Your Code LLMs Perform? Empowering Code Instruction Tuning with High-Quality Data [26.836532205017104]
We find that many datasets suffer from severe data leakage.
This discovery reveals a new challenge: identifying which datasets genuinely qualify as high-quality code instruction data.
We present XCoder, a family of models finetuned from LLaMA3.
arXiv Detail & Related papers (2024-09-05T17:46:30Z)
- ProCQA: A Large-scale Community-based Programming Question Answering Dataset for Code Search [8.700556381819267]
We introduce ProCQA, a large-scale programming question answering dataset extracted from the StackOverflow community.
We propose a modality-agnostic contrastive pre-training approach to improve the alignment of text and code representations of current code language models.
arXiv Detail & Related papers (2024-03-25T12:34:33Z)
- LLM-Assisted Code Cleaning For Training Accurate Code Generators [53.087019724256606]
We investigate data quality for code and find that making the code more structured and readable leads to improved code generation performance of the system.
We build a novel data-cleaning pipeline that uses these principles to transform existing programs.
We evaluate our approach on two challenging algorithmic code generation benchmarks and find that fine-tuning CodeLLaMa-7B improves the performance by up to 30% compared to fine-tuning on the original dataset.
arXiv Detail & Related papers (2023-11-25T02:45:50Z)
- Improving Code Search with Hard Negative Sampling Based on Fine-tuning [15.341959871682981]
We introduce a cross-encoder architecture for code search that jointly encodes the concatenation of query and code.
We also introduce a Retriever-Ranker (RR) framework that cascades the dual-encoder and cross-encoder to promote the efficiency of evaluation and online serving.
arXiv Detail & Related papers (2023-05-08T07:04:28Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- CoSQA: 20,000+ Web Queries for Code Search and Question Answering [63.92224685262063]
The CoSQA dataset includes 20,604 labels for pairs of natural language queries and code.
We introduce a contrastive learning method dubbed CoCLR to enhance query-code matching.
We show that evaluated on CodeXGLUE with the same CodeBERT model, training on CoSQA improves the accuracy of code question answering by 5.1%.
arXiv Detail & Related papers (2021-05-27T15:37:21Z)
- Deep Graph Matching and Searching for Semantic Code Retrieval [76.51445515611469]
We propose an end-to-end deep graph matching and searching model based on graph neural networks.
We first represent both natural language query texts and programming language code snippets with the unified graph-structured data.
In particular, DGMS not only captures more structural information for individual query texts or code snippets but also learns the fine-grained similarity between them.
arXiv Detail & Related papers (2020-10-24T14:16:50Z)
- Efficient Neural Query Auto Completion [17.58784759652327]
Three major challenges are observed for a query auto completion system.
Traditional QAC systems rely on handcrafted features such as the query candidate frequency in search logs.
We propose an efficient neural QAC system with effective context modeling to overcome these challenges.
arXiv Detail & Related papers (2020-08-06T21:28:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
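Several of the related papers above (CodeXEmbed, ProCQA, CoSQA/CoCLR, the hard-negative-sampling work) share the dual-encoder retrieval pattern over which metrics such as the MRR and Answer@1 figures quoted in the abstract are computed: embed queries and code, rank candidates by cosine similarity, and score the ranking. The sketch below uses a random placeholder encoder purely so it runs end to end; it is a generic illustration, not the implementation of any cited system.

```python
import numpy as np

def encode(texts):
    """Placeholder encoder: replace with any trained query/code embedding model.
    Returns L2-normalised vectors, one row per input text."""
    rng = np.random.default_rng(0)                  # deterministic stub, illustration only
    vecs = rng.normal(size=(len(texts), 128))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def rank_code(query_vec, code_vecs):
    """Rank code snippets for one query by cosine similarity (inputs are normalised)."""
    scores = code_vecs @ query_vec
    return np.argsort(-scores)                      # candidate indices, best first

def mrr_and_answer_at_1(queries, codes, gold):
    """gold[i] is the index in `codes` of the correct snippet for queries[i]."""
    q_vecs, c_vecs = encode(queries), encode(codes)
    reciprocal_ranks, hits_at_1 = [], []
    for i, qv in enumerate(q_vecs):
        ranking = rank_code(qv, c_vecs)
        rank = int(np.where(ranking == gold[i])[0][0]) + 1   # 1-based rank of the answer
        reciprocal_ranks.append(1.0 / rank)
        hits_at_1.append(1.0 if rank == 1 else 0.0)
    return float(np.mean(reciprocal_ranks)), float(np.mean(hits_at_1))
```

With a real encoder in place of the stub, calling `mrr_and_answer_at_1(queries, code_corpus, gold_indices)` reflects the standard way code search benchmarks report MRR and Answer@1.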