An Exploratory Study on Code Attention in BERT
- URL: http://arxiv.org/abs/2204.10200v1
- Date: Tue, 5 Apr 2022 21:23:10 GMT
- Title: An Exploratory Study on Code Attention in BERT
- Authors: Rishab Sharma and Fuxiang Chen and Fatemeh Fard and David Lo
- Abstract summary: We investigate the attention behavior of a PLM on code and compare it with natural language.
We show that BERT pays more attention to syntactic entities, specifically identifiers and separators, in contrast to [CLS], the most attended token in NLP.
The findings suggest that the research community can benefit from code-specific representations instead of the common embeddings used in NLP.
- Score: 8.488193857572211
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many recent works in software engineering introduce deep neural models based on the Transformer architecture or use Transformer-based Pre-trained Language Models (PLMs) trained on code. Although these models achieve state-of-the-art results in many downstream tasks such as code summarization and bug detection, they are built on Transformers and PLMs, which are mainly studied in the Natural Language Processing (NLP) field. Current studies rely on reasoning and practices from NLP when applying these models to code, despite the differences between natural languages and programming languages. There is also limited literature on explaining how code is modeled.
Here, we investigate the attention behavior of a PLM on code and compare it with natural language. We pre-trained BERT, a Transformer-based PLM, on code and explored what kind of information it learns, both semantic and syntactic. We ran several experiments to analyze the attention that code constructs pay to each other and what BERT learns in each layer. Our analyses show that BERT pays more attention to syntactic entities, specifically identifiers and separators, in contrast to [CLS], the most attended token in NLP. This observation motivated us to leverage identifiers to represent the code sequence instead of the [CLS] token for code clone detection. Our results show that employing embeddings from identifiers increases the F1-score of BERT by 605% in its lower layers and by 4% in its upper layers. When identifiers' embeddings are used in CodeBERT, a code-based PLM, the clone-detection F1-score improves by 21-24%. These findings suggest that the research community can benefit from code-specific representations instead of the common embeddings used in NLP, and they open new directions for developing smaller models with similar performance.
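The two ingredients described above, inspecting per-layer attention and pooling identifier embeddings instead of taking [CLS], can be sketched with the HuggingFace transformers API. This is a minimal illustration rather than the authors' code: `bert-base-uncased` is a stand-in for their code-pretrained BERT, and `looks_like_identifier` is a crude heuristic in place of proper lexer-based token typing.

```python
import keyword

import torch
from transformers import AutoModel, AutoTokenizer

# Stand-in checkpoint; the paper pre-trains its own BERT on code.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()


def looks_like_identifier(token: str) -> bool:
    """Crude heuristic over WordPiece tokens; a real setup would use lexer token types."""
    t = token[2:] if token.startswith("##") else token
    return t.isidentifier() and not keyword.iskeyword(t)


def analyze_attention(code: str) -> None:
    """Print, per layer, how much attention identifier tokens receive compared to [CLS]."""
    enc = tokenizer(code, return_tensors="pt", truncation=True)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    with torch.no_grad():
        out = model(**enc)
    ident_pos = [i for i, t in enumerate(tokens) if looks_like_identifier(t)]
    for layer, att in enumerate(out.attentions):      # att: (batch, heads, seq, seq)
        received = att[0].mean(dim=0).sum(dim=0)      # total attention received per token
        print(f"layer {layer:2d}: identifiers={received[ident_pos].sum().item():.2f} "
              f"[CLS]={received[0].item():.2f}")


def embed(code: str, use_identifiers: bool = True) -> torch.Tensor:
    """Sequence embedding from mean-pooled identifier tokens, falling back to [CLS]."""
    enc = tokenizer(code, return_tensors="pt", truncation=True)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]    # (seq, hidden)
    if use_identifiers:
        idx = [i for i, t in enumerate(tokens) if looks_like_identifier(t)]
        if idx:
            return hidden[idx].mean(dim=0)
    return hidden[0]                                  # position 0 is [CLS]


# Toy clone-detection style check: similarity of two near-duplicate snippets.
a = embed("def add(a, b):\n    return a + b")
b = embed("def plus(x, y):\n    return x + y")
print("cosine:", torch.nn.functional.cosine_similarity(a, b, dim=0).item())
analyze_attention("def add(a, b):\n    return a + b")
```

In a clone-detection setup, such pooled-identifier vectors (optionally taken from a specific layer via `output_hidden_states=True`) would feed the classifier instead of the [CLS] vector.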
Related papers
- CodeSAM: Source Code Representation Learning by Infusing Self-Attention with Multi-Code-View Graphs [8.850533100643547]
We propose CodeSAM, a novel framework to infuse multiple code-views into transformer-based models by creating self-attention masks.
We use CodeSAM to fine-tune a small language model (SLM) like CodeBERT on the downstream SE tasks of semantic code search, code clone detection, and program classification.
arXiv Detail & Related papers (2024-11-21T22:24:47Z)
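A rough sketch of the masking idea behind approaches like CodeSAM: a structural code-view (for example AST adjacency) becomes a boolean matrix that restricts which token pairs may attend to each other. The toy adjacency and the simplified attention below are illustrative assumptions, not CodeSAM's actual mask construction or model.

```python
import torch
import torch.nn.functional as F


def masked_self_attention(x: torch.Tensor, allowed: torch.Tensor, num_heads: int = 4) -> torch.Tensor:
    """Scaled dot-product self-attention restricted by a boolean code-view mask.

    x:       (seq, dim) token representations
    allowed: (seq, seq) bool matrix; allowed[i, j] means token i may attend to token j
    """
    seq, dim = x.shape
    head_dim = dim // num_heads
    # For brevity we reuse x as queries/keys/values; a real layer has learned projections.
    q = k = v = x.view(seq, num_heads, head_dim).transpose(0, 1)   # (heads, seq, head_dim)
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5             # (heads, seq, seq)
    bias = torch.zeros(seq, seq)
    bias[~allowed] = float("-inf")                                 # block disallowed pairs
    attn = F.softmax(scores + bias, dim=-1)
    return (attn @ v).transpose(0, 1).reshape(seq, dim)


# Toy example: 6 tokens, attention allowed only along pretend AST edges plus self-loops
# (self-loops guarantee every softmax row has at least one finite score).
seq, dim = 6, 32
allowed = torch.eye(seq, dtype=torch.bool)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    allowed[i, j] = allowed[j, i] = True
out = masked_self_attention(torch.randn(seq, dim), allowed)
print(out.shape)   # torch.Size([6, 32])
```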
- Crystal: Illuminating LLM Abilities on Language and Code [58.5467653736537]
We propose a pretraining strategy to enhance the integration of natural language and coding capabilities.
The resulting model, Crystal, demonstrates remarkable capabilities in both domains.
arXiv Detail & Related papers (2024-11-06T10:28:46Z)
- Uncovering LLM-Generated Code: A Zero-Shot Synthetic Code Detector via Code Rewriting [78.48355455324688]
We propose a novel zero-shot synthetic code detector based on the similarity between the code and its rewritten variants.
Our results demonstrate a notable enhancement over existing synthetic content detectors designed for general texts.
arXiv Detail & Related papers (2024-05-25T08:57:28Z)
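The core idea of the zero-shot detector above can be paraphrased in a few lines: rewrite the code with an LLM and score how much survives, since LLM-generated code tends to stay closer to its own rewrites than human-written code does. The skeleton below only shows that scoring step under those assumptions; `rewrite_with_llm` is a hypothetical placeholder for the actual model call, and the threshold is made up.

```python
import difflib
from statistics import mean


def rewrite_with_llm(code: str) -> str:
    """Hypothetical placeholder: prompt an LLM to rewrite `code` and return the result."""
    raise NotImplementedError("plug in a concrete model call here")


def self_similarity(code: str, n_rewrites: int = 4) -> float:
    """Average textual similarity between the code and several LLM rewrites of it."""
    rewrites = [rewrite_with_llm(code) for _ in range(n_rewrites)]
    return mean(difflib.SequenceMatcher(None, code, r).ratio() for r in rewrites)


def looks_llm_generated(code: str, threshold: float = 0.8) -> bool:
    # Illustrative threshold only; the paper calibrates its own similarity measure.
    return self_similarity(code) >= threshold
```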
- CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code [56.019447113206006]
Large Language Models (LLMs) have achieved remarkable progress in code generation.
CodeIP is a novel multi-bit watermarking technique that embeds additional information to preserve provenance details.
Experiments conducted on a real-world dataset across five programming languages demonstrate the effectiveness of CodeIP.
arXiv Detail & Related papers (2024-04-24T04:25:04Z)
- How to get better embeddings with code pre-trained models? An empirical study [6.220333404184779]
We study five different code pre-trained models (PTMs) to generate embeddings for downstream classification tasks.
We find that embeddings obtained through special tokens do not sufficiently aggregate the semantic information of the entire code snippet.
The quality of code embeddings obtained by combining code data and text data in the same way as in PTM pre-training is poor and cannot guarantee richer semantic information.
arXiv Detail & Related papers (2023-11-14T10:44:21Z)
- Exploring Large Language Models for Code Explanation [3.2570216147409514]
Large Language Models (LLMs) have made remarkable strides in Natural Language Processing.
This study specifically delves into the task of generating natural-language summaries for code snippets, using various LLMs.
arXiv Detail & Related papers (2023-10-25T14:38:40Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
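The contrastive component of such code-search training is typically an in-batch InfoNCE-style objective over paired (natural-language query, code) embeddings: matched pairs are pulled together and every other code in the batch serves as a negative. The sketch below shows only that loss, with random tensors standing in for encoder outputs; the paper's full method (including soft data augmentation) is not reproduced here.

```python
import torch
import torch.nn.functional as F


def info_nce(query_emb: torch.Tensor, code_emb: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive loss: row i of query_emb should match row i of code_emb."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(code_emb, dim=-1)
    logits = q @ c.t() / temperature          # (batch, batch) cosine-similarity logits
    targets = torch.arange(q.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, targets)


# Stand-in embeddings; in practice they come from the NL and code encoders being trained.
batch, dim = 8, 256
loss = info_nce(torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```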
- CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation [36.47905744758698]
We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers.
Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning.
arXiv Detail & Related papers (2021-09-02T12:21:06Z)
- CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model [23.947178895479464]
We propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
arXiv Detail & Related papers (2021-08-10T10:08:21Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
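The "where-the-value-comes-from" relation used by GraphCodeBERT can be approximated for Python with the standard `ast` module: track the most recent assignment of each variable and link every later read back to it. This is a simplified straight-line extractor for illustration (no control flow or scoping), not GraphCodeBERT's own data-flow construction.

```python
import ast


def data_flow_edges(source: str):
    """Map each variable read to the line of its most recent prior assignment (straight-line code only)."""
    names = [n for n in ast.walk(ast.parse(source)) if isinstance(n, ast.Name)]
    # Process in source order; within a line, reads happen before the assignment binds its target.
    names.sort(key=lambda n: (n.lineno, isinstance(n.ctx, ast.Store), n.col_offset))
    last_def, edges = {}, []
    for n in names:
        if isinstance(n.ctx, ast.Store):
            last_def[n.id] = n.lineno
        elif isinstance(n.ctx, ast.Load) and n.id in last_def:
            edges.append((n.id, n.lineno, last_def[n.id]))   # (variable, use line, def line)
    return edges


code = "x = 1\ny = x + 2\nx = y\nprint(x)\n"
for var, use_line, def_line in data_flow_edges(code):
    print(f"value of {var!r} used at line {use_line} comes from line {def_line}")
```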
- CodeBERT: A Pre-Trained Model for Programming and Natural Languages [117.34242908773061]
CodeBERT is a pre-trained model for programming language (PL) and natural language (NL).
We develop CodeBERT with Transformer-based neural architecture.
We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters.
arXiv Detail & Related papers (2020-02-19T13:09:07Z)