Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study
- URL: http://arxiv.org/abs/2304.12269v1
- Date: Mon, 24 Apr 2023 17:09:14 GMT
- Title: Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study
- Authors: Tim van Dam, Maliheh Izadi, Arie van Deursen
- Abstract summary: We aim to answer whether making code easier to understand through using contextual data improves the performance of pre-trained code language models for the task of code completion.
For comments, we find that the models perform better in the presence of multi-line comments.
- Score: 4.438873396405334
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Transformer-based pre-trained models have recently achieved great results in solving many software engineering tasks, including automatic code completion, which is a staple in a developer's toolkit. While many have striven to improve the code-understanding abilities of such models, the opposite direction, making the code itself easier to understand, has not been properly investigated. In this study, we aim to answer whether making code easier to understand through contextual data improves the performance of pre-trained code language models for the task of code completion. We consider type annotations and comments as two common forms of additional contextual information that often help developers understand code better. For the experiments, we study code completion at two granularity levels, token and line completion, and use three recent large-scale language models for source code (UniXcoder, CodeGPT, and InCoder) together with five evaluation metrics. Finally, we perform the Wilcoxon signed-rank test to gauge significance and measure the effect size. Contrary to our expectations, all models perform better if type annotations are removed (albeit with small effect sizes). For comments, we find that the models perform better in the presence of multi-line comments (again with small effect sizes). Based on our observations, we recommend making proper design choices when training, fine-tuning, or simply selecting such models given the intended data and application. Better evaluations and multi-modal techniques can also be further investigated to improve the practicality and accuracy of auto-completions.
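As a concrete illustration of the setup the abstract describes, the study amounts to preparing two variants of the same evaluation corpus (with and without a given form of contextual data) and comparing per-example completion scores with a paired test. The two sketches below are hypothetical illustrations in Python, not the authors' pipeline; the helper names, the toy snippet, and the placeholder scores are all assumptions.

```python
# Sketch 1 (hypothetical): stripping type annotations, one of the two forms of
# contextual data studied. Requires Python 3.9+ for ast.unparse; the paper's
# actual preprocessing and target languages may differ.
import ast

class TypeHintRemover(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        node.returns = None                                   # drop "-> int"
        for arg in node.args.args + node.args.kwonlyargs:
            arg.annotation = None                             # drop "a: int"
        self.generic_visit(node)
        return node

    def visit_AnnAssign(self, node):
        if node.value is None:                                # bare "x: int" with no value
            return None
        return ast.Assign(targets=[node.target], value=node.value)

def strip_type_annotations(source: str) -> str:
    tree = TypeHintRemover().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(strip_type_annotations("def add(a: int, b: int) -> int:\n    return a + b\n"))
# def add(a, b):
#     return a + b
```

The second sketch covers the paired significance analysis; the scores are random placeholders standing in for per-example completion metrics, and the rank-biserial correlation is one conventional effect-size choice (the abstract does not state which formula was used).

```python
# Sketch 2 (hypothetical): Wilcoxon signed-rank test plus an effect-size estimate
# over paired per-example scores for the "with" and "without" corpus variants.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
with_types = rng.uniform(0.0, 1.0, size=500)                        # placeholder scores
without_types = np.clip(with_types + rng.normal(0.01, 0.05, 500), 0.0, 1.0)

stat, p_value = wilcoxon(with_types, without_types)                 # paired, two-sided

# Magnitude of the rank-biserial correlation from the test statistic
# (zero differences are dropped, mirroring the test's default behaviour).
diffs = without_types - with_types
n = np.count_nonzero(diffs)
effect_size = 1.0 - (2.0 * stat) / (n * (n + 1) / 2.0)

print(f"W={stat:.1f}, p={p_value:.4f}, rank-biserial magnitude={effect_size:.3f}")
```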
Related papers
- Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach [66.51005288743153]
We investigate the legal and ethical issues of current neural code completion models.
We tailor a membership inference approach (termed CodeMI) that was originally crafted for classification tasks.
We evaluate the effectiveness of this adapted approach across a diverse array of neural code completion models.
arXiv Detail & Related papers (2024-04-22T15:54:53Z)
- Code Representation Learning At Scale [75.04686476303436]
We fuel code representation learning with a vast amount of code data via a two-stage pretraining scheme.
We first train the encoders via a mix that leverages both the randomness of masked language modeling and the structural aspects of programming languages.
We then enhance the representations via contrastive learning with hard negatives and hard positives constructed in an unsupervised manner.
arXiv Detail & Related papers (2024-02-02T22:19:15Z)
- Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond [52.656743602538825]
Fine-tuning pre-trained code models incurs a large computational cost.
We conduct an experimental study to explore what happens to layer-wise pre-trained representations and their encoded code knowledge during fine-tuning.
We propose Telly to efficiently fine-tune pre-trained code models via layer freezing.
arXiv Detail & Related papers (2023-04-11T13:34:13Z)
- CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstrings for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
arXiv Detail & Related papers (2022-11-25T18:05:44Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- What do pre-trained code models know about code? [9.60966128833701]
We use diagnostic tasks called probes to investigate pre-trained code models.
BERT (pre-trained on English), CodeBERT and CodeBERTa (pre-trained on source code and natural language documentation), and GraphCodeBERT (pre-trained on source code with dataflow) are investigated.
arXiv Detail & Related papers (2021-08-25T16:20:17Z)
- Towards Full-line Code Completion with Neural Language Models [25.458883198815393]
We discuss the possibility of directly completing a whole line of code instead of a single token.
Recent neural language models have been adopted as a preferred approach for code completion.
arXiv Detail & Related papers (2020-09-18T03:12:13Z)