LAMNER: Code Comment Generation Using Character Language Model and Named Entity Recognition
- URL: http://arxiv.org/abs/2204.09654v1
- Date: Tue, 5 Apr 2022 20:53:06 GMT
- Title: LAMNER: Code Comment Generation Using Character Language Model and Named Entity Recognition
- Authors: Rishab Sharma and Fuxiang Chen and Fatemeh Fard
- Abstract summary: We present LAnguage Model and Named Entity Recognition (LAMNER), a code comment generator capable of encoding code constructs effectively and capturing the structural property of a code token.
We evaluate the generated comments from LAMNER and other baselines on a popular Java dataset with four commonly used metrics.
- Score: 0.7894331610810762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Code comment generation is the task of generating a high-level natural
language description for a given code method or function. Although researchers
have been studying multiple ways to generate code comments automatically,
previous work mainly considers representing a code token only by its overall
semantics (e.g., a language model is used to learn the semantics of a code
token), while additional code properties such as the tree structure of the code
are included as an auxiliary input to the model. There are two limitations: 1)
learning a code token in its entirety may not capture the information in source
code succinctly, and 2) the code token does not carry the additional syntactic
information that is inherently important in programming languages.
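To make the first limitation concrete, consider a composite identifier such as getUserName (a hypothetical example, not one from the paper): a whole-token embedding treats it as a single opaque symbol, while its sub-words already carry much of the meaning a comment would describe. A minimal Python sketch of that decomposition:

```python
# Illustrative only: splitting a camelCase identifier exposes sub-words that a
# whole-token embedding would collapse into one opaque symbol, which is the
# gap a character-level model can help close.
import re

def split_camel_case(token: str):
    # "getUserName" -> ["get", "User", "Name"]
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", token)

print(split_camel_case("getUserName"))        # ['get', 'User', 'Name']
print(split_camel_case("parseHTTPResponse"))  # ['parse', 'HTTP', 'Response']
```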
In this paper, we present LAnguage Model and Named Entity Recognition
(LAMNER), a code comment generator capable of encoding code constructs
effectively and capturing the structural property of a code token. A
character-level language model is used to learn the semantic representation
that encodes a code token. For the structural property of a token, a Named Entity
Recognition model is trained to learn the different types of code tokens. These
representations are then fed into an encoder-decoder architecture to generate
code comments. We evaluate the generated comments from LAMNER and other
baselines on a popular Java dataset with four commonly used metrics. Our
results show that LAMNER is effective and improves over the best baseline model
in BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, METEOR, and CIDEr by 14.34%,
18.98%, 21.55%, 23.00%, 10.52%, 1.44%, and 25.86%, respectively. Additionally,
we fused LAMNER's code representation with the baseline models, and the fused
models consistently showed improvement over the non-fused models. The human
evaluation further shows that LAMNER produces high-quality code comments.
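The encoder input described above can be pictured as two embeddings fused per code token: a semantic vector from the character-level language model and a type embedding from the NER model. The following PyTorch sketch illustrates that fusion under assumed settings (SEMANTIC_DIM, TYPE_DIM, the toy tag set, and the GRU encoder are illustrative choices, not the paper's reported configuration); the attention-based comment decoder is omitted.

```python
# Minimal sketch (not the authors' implementation): concatenate a character-LM
# semantic embedding with a learned NER type embedding per code token and run
# the sequence through a bidirectional GRU encoder.
import torch
import torch.nn as nn

SEMANTIC_DIM = 64   # assumed size of the character-level LM token embedding
TYPE_DIM = 16       # assumed size of the NER type embedding
HIDDEN_DIM = 128    # assumed encoder hidden size

CODE_TOKEN_TYPES = ["method", "variable", "type", "operator", "keyword"]  # illustrative tags

class FusedCodeEncoder(nn.Module):
    def __init__(self, num_types: int):
        super().__init__()
        self.type_embedding = nn.Embedding(num_types, TYPE_DIM)
        self.encoder = nn.GRU(SEMANTIC_DIM + TYPE_DIM, HIDDEN_DIM,
                              batch_first=True, bidirectional=True)

    def forward(self, semantic_vecs, type_ids):
        # semantic_vecs: (batch, seq_len, SEMANTIC_DIM) from the character LM
        # type_ids:      (batch, seq_len) NER label ids for each code token
        fused = torch.cat([semantic_vecs, self.type_embedding(type_ids)], dim=-1)
        return self.encoder(fused)  # outputs feed an attention-based comment decoder

# Toy usage: one five-token method with placeholder embeddings and labels.
encoder = FusedCodeEncoder(num_types=len(CODE_TOKEN_TYPES))
semantic = torch.randn(1, 5, SEMANTIC_DIM)
types = torch.tensor([[0, 1, 2, 3, 4]])
outputs, hidden = encoder(semantic, types)
print(outputs.shape)  # torch.Size([1, 5, 256])
```

In the paper, the semantic vectors come from the trained character-level language model and the type labels from the trained NER model; the sketch only shows how the two representations could be combined before decoding.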
Related papers
- Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
arXiv Detail & Related papers (2023-05-08T10:00:05Z)
- Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation [61.50286000143233]
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z)
- CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code [75.08995072899594]
We propose CodeBERTScore: an evaluation metric for code generation.
CodeBERTScore encodes the natural language input preceding the generated code.
We find that CodeBERTScore achieves a higher correlation with human preference and with functional correctness than all existing metrics.
arXiv Detail & Related papers (2023-02-10T22:12:05Z)
- Unveiling Code Pre-Trained Models: Investigating Syntax and Semantics Capacities [34.27541293716398]
We extensively analyze seven code models to investigate how they represent code syntax and semantics.
We have developed four probing tasks to evaluate the models' abilities to learn code syntax and semantics.
Our results emphasize the strengths and weaknesses of various code models in mastering code syntax and semantics.
arXiv Detail & Related papers (2022-12-20T06:15:17Z)
- CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations (a generic sketch of such a contrastive objective appears after this list).
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z)
- Contrastive Learning for Source Code with Structural and Functional Properties [66.10710134948478]
We present BOOST, a novel self-supervised model to focus pre-training based on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model in a way that brings the functionally equivalent code closer and distinct code further through a contrastive learning objective.
arXiv Detail & Related papers (2021-10-08T02:56:43Z)
- CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model [23.947178895479464]
We propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model.
In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST).
We also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens.
arXiv Detail & Related papers (2021-08-10T10:08:21Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
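Several of the entries above (CodeRetriever, BOOST, CLSEBERT) rely on a contrastive objective that pulls paired code/text or functionally equivalent code representations together and pushes other in-batch pairs apart. Below is a generic, hedged InfoNCE-style sketch of that idea; it is not taken from any of these papers' code, and the random embeddings stand in for real encoder outputs.

```python
# Generic in-batch contrastive (InfoNCE-style) loss: matching pairs sit on the
# diagonal of the similarity matrix, every other in-batch pair is a negative.
# The temperature and embedding size are illustrative.
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.07):
    # anchor, positive: (batch, dim) embeddings of paired items, e.g. a function
    # and its documentation, or two functionally equivalent code variants.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature   # (batch, batch) cosine similarities
    targets = torch.arange(anchor.size(0))         # index i pairs with index i
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for code/text encoder outputs.
code_emb = torch.randn(8, 256)
text_emb = torch.randn(8, 256)
print(info_nce_loss(code_emb, text_emb).item())
```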