INSPECT: Intrinsic and Systematic Probing Evaluation for Code
Transformers
- URL: http://arxiv.org/abs/2312.05092v1
- Date: Fri, 8 Dec 2023 15:21:54 GMT
- Title: INSPECT: Intrinsic and Systematic Probing Evaluation for Code
Transformers
- Authors: Anjan Karmakar, Romain Robbes
- Abstract summary: We use a framework to define 15 probing tasks that exercise surface, syntactic, structural and semantic characteristics of source code.
We probe 8 pre-trained source code models, as well as a natural language model (BERT) as our baseline.
We find that models that incorporate some structural information (such as GraphCodeBERT) have a better representation of source code characteristics.
- Score: 7.255653248042546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained models of source code have recently been successfully applied to
a wide variety of Software Engineering tasks; they have also seen some
practical adoption, e.g. for code completion. Yet, we still know
very little about what these pre-trained models learn about source code. In
this article, we use probing--simple diagnostic tasks that do not further train
the models--to discover to what extent pre-trained models learn about specific
aspects of source code. We use an extensible framework to define 15 probing
tasks that exercise surface, syntactic, structural and semantic characteristics
of source code. We probe 8 pre-trained source code models, as well as a natural
language model (BERT) as our baseline. We find that models that incorporate
some structural information (such as GraphCodeBERT) have a better
representation of source code characteristics. Surprisingly, we find that for
some probing tasks, BERT is competitive with the source code models, indicating
that there are ample opportunities to improve source-code specific pre-training
on the respective code characteristics. We encourage other researchers to
evaluate their models with our probing task suite, so that they may peer into
the hidden layers of the models and identify what intrinsic code
characteristics are encoded.
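The probing setup described in the abstract keeps the pre-trained model frozen and trains only a small classifier on its hidden representations. Below is a minimal sketch of that idea, assuming the HuggingFace `transformers` and `scikit-learn` libraries; the model name, probed layer, and the toy surface-level task (predicting whether a snippet is "long") are illustrative assumptions, not necessarily one of the paper's 15 tasks.

```python
# Minimal probing sketch (assumed setup): freeze a pre-trained code model,
# extract hidden-layer representations, and train a simple linear probe.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "microsoft/graphcodebert-base"  # any of the probed encoders
LAYER = 8                                    # hidden layer whose output is probed

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()  # the model itself is never fine-tuned; only the probe is trained

def embed(snippets):
    """Return the frozen first-token embedding of the chosen layer for each snippet."""
    feats = []
    with torch.no_grad():
        for code in snippets:
            inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=256)
            hidden = model(**inputs).hidden_states[LAYER]  # [1, seq_len, dim]
            feats.append(hidden[0, 0].numpy())             # first-token vector
    return feats

# Toy surface-level task: label = 1 if the snippet is "long", else 0.
train_code = ["def add(a, b): return a + b",
              "x = 1",
              "for i in range(10): print(i)",
              "pass"]
train_labels = [1, 0, 1, 0]

probe = LogisticRegression(max_iter=1000).fit(embed(train_code), train_labels)
print("train accuracy:", probe.score(embed(train_code), train_labels))
```

The probe's accuracy, compared across models and against a natural language baseline such as BERT, is then read as a measure of how much of the targeted code characteristic the frozen representation encodes.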
Related papers
- Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach [66.51005288743153]
We investigate the legal and ethical issues of current neural code completion models.
We tailor a membership inference approach (termed CodeMI) that was originally crafted for classification tasks.
We evaluate the effectiveness of this adapted approach across a diverse array of neural code completion models.
arXiv Detail & Related papers (2024-04-22T15:54:53Z) - Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
arXiv Detail & Related papers (2023-05-08T10:00:05Z) - Enriching Source Code with Contextual Data for Code Completion Models:
An Empirical Study [4.438873396405334]
We aim to answer whether making code easier to understand by adding contextual data improves the performance of pre-trained code language models on the task of code completion.
For comments, we find that the models perform better in the presence of multi-line comments.
arXiv Detail & Related papers (2023-04-24T17:09:14Z) - Towards Efficient Fine-tuning of Pre-trained Code Models: An
Experimental Study and Beyond [52.656743602538825]
Fine-tuning pre-trained code models incurs a large computational cost.
We conduct an experimental study to explore what happens to layer-wise pre-trained representations and their encoded code knowledge during fine-tuning.
We propose Telly to efficiently fine-tune pre-trained code models via layer freezing.
arXiv Detail & Related papers (2023-04-11T13:34:13Z) - CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstring for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
arXiv Detail & Related papers (2022-11-25T18:05:44Z) - Explainable AI for Pre-Trained Code Models: What Do They Learn? When
They Do Not Work? [4.573310303307945]
We study two recent large language models (LLMs) for code on a set of software engineering downstream tasks.
We identify what CodeBERT and GraphCodeBERT learn on these tasks, i.e., which source code token types they put the highest attention on.
We show some of the common patterns when the model does not work as expected and suggest recommendations.
arXiv Detail & Related papers (2022-11-23T10:07:20Z) - NatGen: Generative pre-training by "Naturalizing" source code [18.410818213965918]
We propose a new pre-training objective, "Naturalizing" of source code.
Unlike natural language, code's bimodal, dual-channel nature allows us to generate semantically equivalent code at scale.
We fine-tune our model in three generative Software Engineering tasks to achieve state-of-the-art performance rivaling CodeT5.
arXiv Detail & Related papers (2022-06-15T15:08:29Z) - Enhancing Semantic Code Search with Multimodal Contrastive Learning and
Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z) - Probing Pretrained Models of Source Code [14.904366372190943]
General pretrained models have been shown to outperform task-specific models in many applications.
We show that pretrained models of code indeed contain information about code syntactic structure and correctness, the notions of identifiers, data flow, and natural language naming.
arXiv Detail & Related papers (2022-02-16T10:26:14Z) - Contrastive Learning for Source Code with Structural and Functional
Properties [66.10710134948478]
We present BOOST, a novel self-supervised model to focus pre-training based on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model in a way that brings functionally equivalent code closer and pushes distinct code further apart through a contrastive learning objective (a minimal sketch of such a loss follows this list).
arXiv Detail & Related papers (2021-10-08T02:56:43Z) - What do pre-trained code models know about code? [9.60966128833701]
We use diagnostic tasks called probes to investigate pre-trained code models.
BERT (pre-trained on English), CodeBERT and CodeBERTa (pre-trained on source code and natural language documentation), and GraphCodeBERT (pre-trained on source code with dataflow) are investigated.
arXiv Detail & Related papers (2021-08-25T16:20:17Z)